Welcome to the peer-powered, weekend summary edition of the Law Technology Digest! Using PinHawk's patented advanced big data analytics, this summary contains the best of the best from the week. These are the stories that you, our readers, found most interesting and informative. Enjoy.
Kevin O'Keefe quoting Dave Winer caught me off guard this morning and made me think. Dave said, "Someday they will have AI actors delivering the nightly news and no one will notice." (Emphasis added) I'm not a huge news hound, and given some of the newest leaps in AI video generation, I could easily see myself missing a newscaster's replacement with an AI-generated clone. Kevin goes on to chide law firms for the level and quality of their content, which "appears to be legal summaries coming from cases, regulations, and whatnot." He continues, "You could have a law student do those summaries, or even AI, since it can summarize developments in case law or regulations." So what some law firms are already producing on their websites might have already been replaced by AI. Doug Austin's post is all about the "actress" Tilly Norwood. She isn't a real person, but a creation of AI. I haven't seen any of Tilly's videos, just still pictures, but I'm impressed. The art of video manipulation is amazing. While I know it wasn't AI generated per se, the pre-serum Steve Rogers in Captain America: The First Avenger was damn impressive! Could you be replaced by AI? Maybe the more interesting question is: would anyone notice if you were? Read more at Real Lawyers Have Blogs: AI Can Write the Legal Summaries Most Lawyers and Law Firms Put Up on the Net and at eDiscoveryToday: Tilly Norwood is the New Actress Everyone is Talking About. Except She Isn't One: Artificial Intelligence Trends
I have to admit that as my mother's dementia deepens, my fascination with the human mind increases. My mother cannot always remember that her husband of 50 years died 15 years ago, or the names of her seven grandchildren. But she recognizes the sound of the blinker in my car (it's very loud) and knows I'm driving. What sticks in her memory (and why) is a fascinating topic. If you find the mind fascinating, then this is the post for you. The Monty Hall problem, metacognition, and the paradox of choice are all explored as Dragan Rangelov discusses changing your mind. Before you change your mind, a tip of the brain pan to Stephen Abram as you read more at The Conversation: What actually happens in your brain when you change your mind?
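If the Monty Hall problem is new to you, here's a quick back-of-the-envelope simulation (mine, not from the article) showing why changing your mind is the winning move: sticking with your first pick wins about a third of the time, while switching wins about two thirds.

```python
import random

def monty_hall(switch: bool, trials: int = 100_000) -> float:
    """Simulate the Monty Hall game and return the win rate."""
    wins = 0
    for _ in range(trials):
        car = random.randrange(3)       # door hiding the car
        pick = random.randrange(3)      # contestant's first pick
        # Host opens a door that is neither the pick nor the car
        opened = next(d for d in range(3) if d != pick and d != car)
        if switch:
            # Change your mind: take the one remaining unopened door
            pick = next(d for d in range(3) if d != pick and d != opened)
        wins += (pick == car)
    return wins / trials

print(f"stay:   {monty_hall(switch=False):.3f}")   # ~0.333
print(f"switch: {monty_hall(switch=True):.3f}")    # ~0.667
```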
Laura Lorek has a great post today (and I loved the "ChatGPT Usage By Occupation" chart!). It talks about OpenAI and Anthropic and notes from a survey that "Specialized legal AI did not significantly outperform general AI in reliability or usefulness, but offers better workflow integration and context handling, especially within Microsoft Word, according to the study." One particular quote I found interesting was, "Law firms are providing attorneys with secure, licensed versions of AI tools, rather than public ChatGPT, to protect client confidentiality, said Greg Lambert, chief innovation officer with Jackson Walker in Houston. The firm uses four primary AI tools: Lexis+ AI, Westlaw Precision AI, Harvey.ai, and Microsoft Copilot." All of those tools are multi-model wrappers around many different LLMs (OpenAI, Gemini, Anthropic, etc.). Are the AI decisions being made really more about the wrapper and NOT the LLM? I hate to say it, but my mind flashed back to my first law firm and our move off Wang VS systems and the computers I bought. Those were the early days of PC compatibles, and the decision on what brand of PC to buy came down to the BIOS and its compatibility with IBM. The rest of it just came with the BIOS. (Note: I didn't buy Wang PCs because they had the worst compatibility.) I can now envision my next presentation on generative AI: I'll have AI generate me a background with a bunch of different famous candy bars, but with all the logos replaced by the legal AI vendors. What's your favorite wrapper? Read more at Law.com: Lawyers Send Millions of Queries to AI Tools Despite Security Risks, Firm Bans
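To make the wrapper-versus-LLM question concrete, here's a toy sketch (all names are hypothetical, not any vendor's actual API) of why the differentiation lives in the wrapper layer while the model underneath is interchangeable:

```python
from dataclasses import dataclass
from typing import Callable

# Stand-ins for calls to different foundation models (hypothetical).
def call_openai(prompt: str) -> str:
    return f"[gpt] {prompt}"

def call_anthropic(prompt: str) -> str:
    return f"[claude] {prompt}"

@dataclass
class LegalAIWrapper:
    """The vendor layer: prompt templates, citations, Word integration, etc."""
    backend: Callable[[str], str]  # the interchangeable LLM underneath

    def summarize_case(self, case_text: str) -> str:
        # The wrapper's value-add lives here, not in the model choice.
        prompt = f"Summarize the holding and key citations:\n{case_text}"
        return self.backend(prompt)

# Same wrapper, different "candy" inside:
for backend in (call_openai, call_anthropic):
    print(LegalAIWrapper(backend).summarize_case("Smith v. Jones ..."))
```

Swap the backend and the product barely changes; the templates, integrations, and guardrails are the wrapper, just like the BIOS was the part that actually mattered.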
Isn't it the perfect definition of hypocrisy that the man who wants OpenAI to be everywhere is telling European Commission antitrust enforcers that he's concerned about Apple, Microsoft and Google? Read more at Silicon: OpenAI Tells EU Regulators Of Antitrust Concerns
So it's not just lawyers! I'm not sure whether that's a good thing or a bad thing. Deloitte charged the Australian government nearly $300K for a 273-page report, "Targeted Compliance Framework Assurance Review." Kyle Orland writes, "Shortly after the report was published, though, Sydney University Deputy Director of Health Law Chris Rudge noticed citations to multiple papers and publications that did not exist. That included multiple references to nonexistent reports by Lisa Burton Crawford, a real professor at the University of Sydney law school." Whoops! That's a small problem. "On page 58 of that 273-page updated report, Deloitte added a reference to 'a generative AI large language model (Azure OpenAI GPT-4o) based tool chain' that was used as part of the technical workstream to help '[assess] whether system code state can be mapped to business requirements and compliance needs.'" OK, well, disclosure is a good thing! "But Sydney University's Rudge told AFR that 'you cannot trust the recommendations when the very foundation of the report is built on a flawed, originally undisclosed, and non-expert methodology... Deloitte has admitted to using generative AI for a core analytical task; but it failed to disclose this in the first place.'" I'm not sure what kind of refund they are giving the Australian government, but I bet it is a fraction of the reputational damage they've just done to themselves. Talk about shooting yourself in the foot! Read more at ars technica: Deloitte will refund Australian government for AI hallucination-filled report
As AI makes more and more inroads, it is having both positive and negative impacts. As the author writes, "AI systems can introduce unique risks, such as biasing decision outcomes, violating compliance guidelines, or exposing sensitive data. Without an AI impact assessment, issues like these might not surface until after they cause unacceptable financial, legal, and/or reputational damage." We all look at the potential impacts of new AI tools, but are we going far enough? Some of the considerations that a formal AI impact assessment looks at include:
- Is the AI system functioning as intended and delivering the intended results?
- Who or what could be impacted, for better or worse, by the AI system's outcomes and decisions, including errors?
- What could go wrong if the AI system yields biased outcomes, generates invalid results, causes legal or compliance problems, creates cybersecurity vulnerabilities, etc.?
- How best to measure and mitigate potential AI system risks?
If your firm has conducted an AI impact assessment, drop me a line and let me know. Read more at Pivot Point Security Blog: What is an AI Impact Assessment and Does My Business Need One