Welcome to the peer-powered, weekend summary edition of the Law Technology Digest! Using PinHawk's patented advanced big data analytics, this summary contains the best of the best from the week. These are the stories that you, our readers, found most interesting and informative from the week. Enjoy.
You may recall I highlighted the dispute between the Standards for the Legal Industry Alliance (SALI) and the Advancement of Legal and Ethical AI (ALEA) in my post, This is why we can't have nice things. But thanks to the efforts of the Legal Marketing Association (LMA) we CAN have nice things again. Seeing the value of the legal matter standards, LMA has stepped in and brokered a deal that merges all three associations. The mission statement of SALADS, the conjoined organization, is impressive. With no more infighting we can now get the standards we all deserve! I have heard a rumor that Bob Ambrogi has been approached to be CEO of the combined organization, but he has not yet responded to my email inquiry. Be sure to read more at LawSites: Battle over standards resolved
Katrina Pugh, Jonathan Ralton, Andrew Trickett, Marc Solomon, and Eve Porter-Zuckerman warn us about the risks of AI as it relates to KM: "inaccuracy, manipulation, compromised learning, and social isolation." In a recent paper they discuss the valuable role knowledge managers can play in taming AI: "we explore how knowledge managers (KM'ers) can intervene successfully with collective sense-making, example-selection, and a creative AI discernment. We call those three moves 'collectivity,' 'nostalgia,' and 'selectivity.'" This is a must read for any pinion at the intersection of KM and AI: REAL KM: Knowledge managers bring collectivity, nostalgia, and selectivity to AI
This post originally came out in March, but I think it bears rehighlighting because it is so good. The average pinion struggling to use AI will get a lot out of the tips - from mnemonics (take your pick: ICE, RISEN, COSTAR) to avoiding the "AI Accent," these are great, shareable tips. Kudos to the Traveling Coaches author, whoever it was. Read more at Traveling Coaches: 6 Tips for Copilot
In his post today, William Josten takes on the American Bar Association (ABA) Standing Committee on Ethics and Professional Responsibility Formal Opinion 512, throwing some additional light on the costs of generative AI and what lawyers can charge for its use. One interesting quote: "if using a [GenAI] tool enables a lawyer to complete tasks much more quickly than without the tool, it may be unreasonable under Rule 1.5 for the lawyer to charge the same flat fee when using the [GenAI] tool as when not using it." As William points out, "This creates a potentially massive loophole and frankly, an opportunity for after-the-fact regrets." This is one of the most interesting posts about costs and genAI I have read in the past few years. Don't run away and hide, but instead turn the speakers on and listen as you read more at Thomson Reuters Blog: Is ABA Formal Opinion 512 off the mark? And if so, what can law firms and GCs do about it?
I think I'm having a bit of a "get off my lawn" moment with this post. Josh Zylbershlag, Director of E-Discovery Services at Paul, Weiss, Rifkind, Wharton & Garrison, won the Legalweek Leaders in Tech Law Award for E-Discovery Technology and Innovation in a law firm, which I am sure was justly deserved. But the quote, "Innovation mindset must be the rule," gets to me. Sure, some firms are more innovative than others. Some firms have C-level innovation people, others have entire innovation departments. I've been in legal long enough that I can recall when law firm IT was like a Ronco Showtime Rotisserie Oven where you "Set it and forget it." Tech was bought, installed and rarely upgraded. But those days and the people who embraced that approach are gone. The idea that when it comes to technology, innovation and forward progress is NOT the norm grates on me. Every IT person I know is innovative in one way or another. Let me know if you agree or disagree with me after you read more at Legaltech news: Paul Weiss' Director of E-discovery Services: An 'Innovation Mindset Must Be the Rule'
In his blog post today, Javvad Malik talks DORA, the Digital Operational Resilience Act. The goal of DORA is to strengthen the cybersecurity posture of banks, insurance companies and investment firms in the EU. Javvad writes, "To do this, the regulation looks to standardize how financial entities report cybersecurity incidents, test their operational resilience, and manage third-party risk." But of course, as with any good "standard," he notes "its implementation and enforcement vary from country to country." If you want to learn more about DORA and her variants, be sure to read more at Security Awareness Training Blog: Exploring the Implications of DORA: A New Global Standard For Financial Cybersecurity
And because we all need a depressing apocalyptic post to head into the weekend, we have this gem by Ryan Whitwam. I appreciate Ryan's note that "Unfortunately, we don't have anything as elegant as Isaac Asimov's Three Laws of Robotics," as I have thought since the beginning that AI developers should have built something like that in from the start as a safeguard. Instead, what we have are bolt-ons. The researchers at DeepMind have a paper (108 pages before references and 145 pages with) that identifies four categories of risk: misuse, misalignment, mistakes, and structural risks. I haven't done anything other than download the paper to read this weekend, but it seems we have less than five years to digest this paper and plan accordingly. Listen to the words of Zager And Evans as you read more at ars technica: DeepMind has detailed all the ways AGI could wreck the world
I know Benjamin Joyner is referencing the report from Factor, "GenAI in Legal Benchmarking Report 2025," but I think the struggle to calculate ROI for AI is not limited to in-house legal teams. I've been pointing out that many of the posts on legal AI are more smoke than substance and that the actual cost of AI tools seems to be conveniently left out of each and every one of them. Now granted, when you're talking about purchasing in general, the fact that legal departments are an expense and law firm lawyers are the revenue generators is absolutely a factor in deciding what to spend. But I think we can all benefit from reading more at Legaltech news: In-House Teams Struggling to Take Legal AI Beyond the Trial Stage
Theda Snyder's post applies to everyone, IT techs in addition to the lawyers she specifically references in her post. On the first reading of the title I focused on "when you know too much" and thought to myself, there have been few times in my life when I've actually felt that way. But when you factor in the communications part and her noting, "When someone is expert on a subject, they tend to forget to explain the basics," it all clicked into place. From time to time I have been guilty of skipping over some of the basics because of my expertise in some areas. I think as you grow and mature your communication skills, it becomes easier, but those skills are something you continue to refine over your lifetime. I have had to ask both members of my staff and lawyers to stop, back up, and give me more basic details. Everyone can benefit from reading this post, so share it far and wide. attorneyatwork: The Curse of Knowledge: Effective Communication When You Know Too Much