Welcome to the peer-powered, weekend summary edition of the Law Technology Digest! Using PinHawk's patented advanced big data analytics, this summary contains the best of the best from the week: the stories that you, our readers, found most interesting and informative. Enjoy.
The Future of Life Institute has an interesting mission statement, and while it's laudable, I'm not sure there is much transparency in their new AI report card. "A panel of eight AI experts looked at the companies' public statements and survey answers, then awarded letter grades on 35 different safety indicators - everything from watermarking AI images to having protections for internal whistleblowers." That doesn't exactly explain how or why "The Claude and ChatGPT makers, respectively, get a C+, while Google gets a C for Gemini. All the others get a D grade, with Qwen-maker Alibaba bottom of the class on a D-." Tip your mortarboard to Stephen Abram as you read more at Mashable: AI safety report: Only 3 models make the grade
I saw that Winston Weinberg and Gabriel Pereyra were on Reddit doing an AMA, and I scanned some of the 2+ hours worth of content, but thankfully we have Bob Ambrogi, who has pored through every bit of it. There is a lot of good stuff in Bob's post, but I'm going to jump to the wrapper conversation. There have been many posts about legal wrappers on general LLMs and the basic question of why pay more? Gabriel addressed this with the following six differentiators:
- Better AI for legal through a singular focus on this industry.
- A goal to make entire legal teams (not just individual lawyers) more effective on matters.
- Deep integrations with legal tech ecosystems and internal law firm systems.
- Governance features such as ethical walls that firms and clients require.
- A focus on making law firms more profitable as businesses.
- Client collaboration infrastructure to allow firms and clients to securely share data and collaborate.
As I read and reread the list, I'm not sure I'm fully sold. In particular, I've not read any text nor heard from any salespersons on Harvey's ROI and how it makes a firm more profitable. In part of Bob's summary, he writes, "What came through clearly in this AMA is that Harvey is betting on vertical depth rather than horizontal breadth, on workflow orchestration rather than just better models, and on the belief that legal tech penetration will expand dramatically beyond today's 3% of the total market." Read more at LawSites: Harvey Cofounders Answer Tough Questions in Reddit AMA: Valuation, Competition and the Future of Legal AI
We were just talking about Harvey and wrappers in yesterday's newsletter, and now we have Dentons saying they don't need no stinkin' wrappers! Patrick Shortall writes, "Dentons, the world's largest law firm by headcount, has announced a collaboration with OpenAI, the world's most valuable AI start-up." "Bugra Ozer, the firm's data science and AI governance lead, says it chose OpenAI not only because of its performance in the benchmarking exercise but also because of the scope to build products as desired, and immediate access to new models - by contrast with some third-party providers, he added, which could take up to two months to feed them through to customers. 'In our case, it is available the next day,' Ozer says, pointing out that access is sometimes weeks ahead of models' public roll-out." So much for the differentiators Gabriel Pereyra listed for Harvey. Maybe we can get an ILTACON panel discussion with Gabriel Pereyra and Paul Jarvis or Bugra Ozer from Dentons to discuss the wrapper/no wrapper issue. Read more at Legal IT Insider: Dentons enters partnership with OpenAI - Insights from data science lead Bugra Ozer
If you adopt a dog, the differences between a Great Dane and a Chihuahua are obvious and pretty significant! But what about adopting a LARGE language model vs a SMALL one? Are the differences as significant? Lin Tian and Marian-Andrei Rizoiu address that issue in a great post today. I love the analogies, "What's better - a minivan or a sports car? A downtown studio apartment or a large house in the suburbs?" And the answer is what you might expect, "it depends on your needs and your resources." With AI, "The choice between small and large language models isn't about which is objectively better - it's about which better serves your specific needs." Read more at REAL KM: What are small language models and how do they differ from large ones?
If you've deployed AI agents or are considering doing so, then you should read this post by Sinisa Markovic. NVIDIA and Lakera AI have released a safety and security framework designed to outline the risks and to "measure them inside real workflows." The paper is 44 pages, and according to Sinisa, "The work includes a new taxonomy, a dynamic evaluation method, and a detailed case study of NVIDIA's AI-Q Research Assistant. The authors also released a dataset with more than ten thousand traces from attack and defense runs to support outside research." I've downloaded it but haven't yet had the opportunity to check it out myself. His conclusion: "The authors stress that static testing will not reveal every emergent risk in agentic systems. They argue that safety agents, probing tools, and continuous evaluators embedded in the workflow can give teams the visibility they need for safe deployment at scale. The dataset released with the study provides a large sample of real attacks and defenses, which the authors hope will support better research on agentic risk." Read more at HELPNETSECURITY: NVIDIA research shows how agentic AI fails under attack
I've never been a fan of low code/no code tools, everyman programmers, or what Kristin Calve refers to as "Citizen Developers." Professional coders have enough issues, let alone amateurs who have never been trained in databases, data structures or security. Kristin writes, "The risk isn't the creativity. It's the scale and invisibility. As Citizen Developers multiply, companies accumulate AI-driven systems they never designed, never reviewed, and often don't even know exist." She nails it - the core issue is security! When people don't know what they are doing, data gets altered, deleted or copied to locations it shouldn't be. Systems of truth become worthless. Shadow code, shadow apps and shadow databases are a huge risk to any organization. Keep this in mind as you read more at Corporate Counsel Business Journal: Citizen Developers Reshaping Work in Plain Sight
While litigation between Thomson Reuters and ROSS Intelligence continues (and continues...), Bob Ambrogi is keeping his eye on it. He already reported on the 10 amicus curiae briefs filed in support of ROSS, "all arguing that the now-defunct AI legal research startup did not violate copyright law." Now apparently there are 9 amicus briefs filed in support of TR! Bob writes, "Those filing briefs range from major movie studios such as Disney and Paramount, to news media and copyright organizations, to individual copyright law professors, and even to TR's principal competitor LexisNexis." (Apparently McDonald's has not yet weighed in.) Bob used ChatGPT to help him extract information from the briefs and create the summaries he included. Not being the lawyer that Bob is, I might suggest we use a count method to decide who wins: ROSS has 10 amicus briefs to TR's 9. Or maybe a page or word count would be more fair? Set aside some additional time to read more at LawSites: Film Studios, News Media and Even Competitor LexisNexis Among the Nine Amicus Briefs Supporting Thomson Reuters' Copyright Case Against ROSS
Have you been thinking AI privacy policies have been getting longer, more complex, and almost impossible to understand? Sinisa Markovic is going to confirm your beliefs with his post referencing a new study, "A Longitudinal Measurement of Privacy Policy Evolution for Large Language Models." He writes, "Researchers looked at privacy policies from 11 providers and tracked 74 versions over several years. The average policy reached about 3,346 words, which is about 53 percent longer than the average for general software policies published in 2019." Just like Santa's naughty list, they keep getting longer! This is a fascinating look at how obtuse privacy policies have become. Forewarned is forearmed. Be sure to read more at HELPNETSECURITY: LLM privacy policies keep getting longer, denser, and nearly impossible to decode
Billed as assisting with remote and hybrid work, Microsoft is planning some big brother/big sister additions to Teams. "Microsoft Teams may soon be able to reveal your work location. The app will display your building when you connect to Wi-Fi." A tip of your tin foil hat to Stephen Abram as you read more at ZD NET: Microsoft Teams may soon reveal when you start and leave work - here's how
There haven't been any recent rolling brownouts in California (natives can correct me, but I believe they now buy energy from neighboring states), but that coupled with the push for electric vehicles never made sense to me. How can you add all those electric vehicles to the grid when you can't let people run the ACs in their houses? Now it seems we should be more concerned with running AI instead of our air conditioners. Assuming the author has his or her facts correct, the numbers are staggering:
- By 2050, power demand in the United States will increase by 50%
- 70% of transmission lines in the United States are reaching the end of their life
- 55% of residential power transformers are nearing the end of their life
- Replacing the entire grid would cost close to $5 trillion
Wish along with me that we had Mr. Scott as you read more at social media explorer: Can the United States Stay Ahead of the Power Demand of AI?
I thought Adi Gaskell's post on when good parts make a bad whole was fascinating. He writes, "Creativity often means combining things that don't usually go together." "Sometimes these combinations work beautifully. But when they don't, they can do more than confuse people-they can actually make the individual parts seem worse than they are." The research, based on songs on albums (and yes, the research is old - 1998-2005), also hit home with me. I have very eclectic taste in music and I like individual songs, not typically albums (yes, I am old too). Adi continues, "But when an album had songs that didn't fit well together, even good tracks were less likely to succeed. Why? Because people don't always judge parts separately from the whole. When a song is part of an album that feels messy, the song gets some of the blame-even if it's not the problem." He concludes, "In creative work, as in organisations, we often celebrate mixing things up. But this study is a reminder that combining ideas-or people-isn't just about adding good parts. It's about how those parts hang together." How do your lawyers "hang together?" How about your IT people? Be sure to read more at REAL KM: When good parts make a bad whole
I think it was about ten years ago when Mattel released the Hello Barbie, capable of listening to and recording conversations. Now, courtesy of Pebble's Eric Migicovsky, we have a $75, AI-powered, Bluetooth-connected ring that can be used to purposely record conversations. No way a device like that could be abused or used for nefarious purposes, right? Read more at Silicon: Pebble Founder Launches $75 Smart Ring For Taking Notes
If for no other reason, I wanted to read Rebecca Hinds' and Robert Sutton's post to reduce my list of AI tensions to a mere five. Sadly, they are more like five umbrella groups of tensions, and I didn't have one of them on my list, so now I've just added to my lot! Rebecca and Robert identify these tensions:
- Experts vs. Novices
- Centralized vs. Decentralized
- Flatter vs. Taller Hierarchy
- Fast vs. Slow
- Top-Down vs. Peer-Driven Change
I am not that familiar with Harvard or its medical school, but this story is spot on for the world of AI. "The challenge for leaders reminds us of an old (and likely apocryphal) story about a Harvard Medical School dean's warning to incoming medical students on the first day of school: 'Fifty percent of what we teach you will turn out to be wrong. The problem is we don't know which 50%.'" Which half indeed! Be sure to read more at Harvard Business Review: The 5 AI Tensions Leaders Need to Navigate