Welcome to the peer-powered, weekend summary edition of the Law Technology Digest! Using PinHawk's patented advanced big data analytics, this summary contains the best of the best from the week. These are the stories that you, our readers, found most interesting and informative. Enjoy.
Perhaps I'm getting jaded over lawyers submitting briefs with fake case cites, but darn if the medical field doesn't have more interesting hallucinations. It would seem Appellant Jonathan Karlen doesn't read the PinHawk (or has been under quite a large number of rocks) and didn't know what his legal duties were when he submitted the work of an online attorney consultant he hired, putting 22 fake case cites before the court. I'm guessing that has to be a record number of fake cases. But even that doesn't really hold a candle to the publication of "Cellular functions of spermatogonial stem cells in relation to JAK/STAT signaling pathway," a peer-reviewed article in Frontiers. The paper, submitted by three Chinese researchers, contained "gibberish text and, most strikingly, one includes an image of a rat with grotesquely large and bizarre genitals." The paper also contains misspellings and other gibberish charts and graphs. Mr. Karlen got fined $10,000 for his mistakes, but clearly legal isn't the only industry with AI issues. We've just got more boring AI screw-ups. Read more at ars technica: Scientists aghast at bizarre AI rat with huge genitals in peer-reviewed article and eDiscoveryToday: Appellant Fined $10,000 for Frivolous Appeal With 22 Fake Case Cites: Artificial Intelligence Trends
Stephen Embry looks at two recent studies (LexisNexis and LawPay-MyCase) on the use of AI and Gen AI. They represent both ends of the spectrum of law: big and small. He declares it a tale of two cities, "The two studies show some key differences between how the very large and small firms view GenAI and its use. The bigger firms see some business opportunities, while the very small firms appear to be much more cautious in their approach. Use cases and concerns about use also differ." With the amount of money and people a big firm can throw at a project (any project), there will always be differences from very small firms. Read more at TechLaw Crossroads: From Big Law to Small Firms: A Tale of Two Cities in Embracing Legal AI
I had frankly lost track of this case, but it's back in the news with Hewlett-Packard claiming it lost four billion dollars on its acquisition of Autonomy. HP wants its money back from former Autonomy CEO Mike Lynch and ex-CFO Sushovan Hussain. "Lynch has previously stated that the dispute stems from a misunderstanding of UK and US accounting rules." Hmmm, not sure I buy that. Read more at Silicon: Hewlett-Packard Tells Court It Lost $4 Billion From Autonomy Acquisition
Jared answers his post's question with a yes, where my initial thought is no. He talks with Erik Mazzone (Law Practice Management Advisor with Mazzone Works LLC). The conversation is interesting; it reminds me of the work I did for Clearspire so many years ago. The tech is a piece of this, but not all of it. It would seem possible for sole practitioners and firm models where you avoid the issue of partnership. A partnership where everyone thinks they own 51% of the business and a tech company don't really mix well. Read and listen to more at ABOVE THE LAW: Should You Operate Your Law Firm Like A Tech Company?
You get a plus one out of the gate if you knew what "pedagogy" meant without looking it up. Alice Keeler has a fascinating post about depth of knowledge. It is in the context of teacher and student, but I was drawn in nonetheless. I am curious whether anyone has ever attempted to define DOK in legal? Please let me know if you have. With a hat tip to Stephen Abram, be sure to read more at Paperless Is Not a Pedagogy: #DOKchat - DOK is Hard Let's Talk About It
I hadn't given ChatGPT's short-term memory issues any deep or serious thought until recently. Dealing with my mother and her failing memory has made me appreciate the value of a good memory. OpenAI is experimenting "with adding a form of long-term memory to ChatGPT that will allow it to remember details between conversations. You can ask ChatGPT to remember something, see what it remembers, and ask it to forget. Currently, it's only available to a small number of ChatGPT users for testing." I think this could be a fabulous move forward. I especially like that they are also building in a "forget" option to deal with potential mistakes we humans might make. While I haven't found it to be a huge issue at my current level of AI usage, the repetitive nature of dealing with short-term memory makes me think this is a huge plus for the human side of the AI equation. Read more at ars technica: OpenAI experiments with giving ChatGPT a long-term conversation memory
If you needed any more proof that the industry is tripping all over itself in its rush to AI, look no further than the mess that is Google. Google has its issues, and having heard from an insider, it's much worse than what you read about. "One week after its last major AI announcement, Google appears to have upstaged itself. Last Thursday, Google launched Gemini Ultra 1.0, which supposedly represented the best AI language model Google could muster-available as part of the renamed 'Gemini' AI assistant (formerly Bard). Today, Google announced Gemini Pro 1.5, which it says 'achieves comparable quality to 1.0 Ultra, while using less compute.'" I expect next week we'll hear about Bard Ultra Pro Max v4.3! Read more at ars technica: Google upstages itself with Gemini 1.5 AI launch, one week after Ultra 1.0
Paul Hankin takes us to a "dimly lit jazz club, where a saxophonist weaves a tapestry of melodies, each note a spontaneous yet coherent part of a larger musical story." And then he smoothly slides into "the intricate workings of an artificial intelligence (AI) model, churning through data, predicting the next word in a sentence with uncanny accuracy." I love his blending of jazz and AI. Whether you read more or not, you might just want to kick back and listen to this. But I would advise reading more at ILTA Blogs: Harmony in Prediction: The Syncopated Rhythms of Jazz Improvisation and AI Learning
Stephen Abram finds us a research paper on AI hallucinations. It seems that Negar Maleki, Balaji Padmanabhan and Kaushik Dutta are sticklers who want to clarify what AI does when it "hallucinates." I've highlighted before that confabulation or delusion are probably more appropriate descriptors. I've downloaded the 33-page paper to read later. The authors "conduct a systematic review of the use of 'AI hallucination' across 14 databases with a focus on identifying various definitions that have been used in the literature so far (our review covers more fields than just healthcare and computer science, including ethical and legal settings, and domains as diverse as physics, sports, etc. in order to explore any broader issues)." If you want to know more definitions of technological hallucinations, then be sure to read more at info DOCKET: Research Preprint: "AI Hallucinations: A Misnomer Worth Clarifying"
I know I am a bit biased, but I do think the medical profession too quickly turns to drugs as the answer to all mankind's ills. It seems AI did better here. "When given information on fictional patients of varied depression severity, sex and socioeconomic status, ChatGPT mostly recommended talk therapy. In contrast, doctors recommended antidepressants." It seems that AI might be better at following the rules as well: "US, British and Australian guidelines recommend talk therapy as the first treatment option ahead of medication." Read more at REAL KM: AI can already diagnose depression better than a doctor and tell you which treatment is best
The minions being behind AI-produced art makes a strange amount of sense when you think about it. But poking fun at the current state of AI happened a lot during the Super Bowl. For a replay of some of the best, be sure to watch and have some fun at ars technica: The Super Bowl's best and wackiest AI commercials
I have never experienced a near-death time dilation like Ruth Ogden describes in her post today. I have experienced reading a great science fiction book where seemingly only minutes have passed, as well as slogging through a dry college text where each page seems to take an hour. A self-professed "time nerd" (I could only qualify for this if being a Doctor Who fan was acceptable), she's done serious research exposing people to electric shocks, sending them across 100-metre-high crumbling bridges (in VR) and isolating them in Antarctica for a year. (Not sure I want to call myself a time nerd any more!) It is an interesting exploration of whether we can learn to use our own emotions to harness the brain's ability to distort time. Wouldn't that be amazing! A hat tip to Stephen Abram as you read more at INVERSE: Our Brains Completely Distort How Time Actually Happens - Here's How To Take Advantage Of It