Welcome to the peer-powered, weekend summary edition of the Law Technology Digest! Using PinHawk's patented advanced big data analytics, this summary contains the best of the best from the week. These are the stories that you, our readers, found most interesting and informative. Enjoy.
If you're not familiar with what a "kill chain" is, let's start with a Google AI overview. "A kill chain is a structured, end-to-end model defining the stages of an attack, used in both military (Find, Fix, Fight, Finish) and cybersecurity contexts to identify, analyze, and disrupt adversary actions." If you work in security or with artificial intelligences, or are just a tech geek, you'll want to read up on this. Bruce Schneier concludes his post, "By understanding promptware as a complex, multistage malware campaign, we can shift from reactive patching to systematic risk management, securing the critical systems we are so eager to build." Read more at Schneier on Security: The Promptware Kill Chain
And speaking of the here and now and the future, Mirko Zorz's post will get you thinking! It may also scare the pants off you. Much like AI, quantum computing is considered a future thing. As Mirko writes, it is "...often treated as a distant, theoretical cybersecurity issue." According to Ronit Ghose, Global Head, Future of Finance at Citi Institute, the biggest misconception is that "...quantum threats begin on a single future Q-day, when quantum machines suddenly crack encryption. In reality, adversaries can harvest encrypted data today and decrypt it later, creating long-term exposure for banks handling sensitive identity and transaction data." Think about that! Bad guys are taking your encrypted data now with the idea that they WILL be able to crack it in the future. It's like some perverted cyberthief version of J. Wellington Wimpy. Read more at HELPNETSECURITY: Your encrypted data is already being stolen
Dorothy Neufeld's post and Sam Parker's graphic make a great duo this morning. You would think from the articles from both legal and nonlegal sources that the United States would be first on the list of AI adoption instead of 24th! A hat tip to Stephen Abram as you read more at Visual Capitalist: Mapped: AI Adoption Rates by Country
Last week I highlighted a post by Aaron Tay about our interface to AI being a "Blank Box." Long-time pinion Freya Zhou wrote me to say that the blank box isn't necessarily the norm as Aaron indicated. AI interfaces are ever evolving, and some systems, in response to uploaded background materials, will "automatically generate suggested queries and summaries they can use immediately, so they get value right away without needing to 'prompt engineer.'" Always good to know! With some of the posts I read, there are those that feel that in the next few evolutions, the AI systems won't need humans at all. Read more at fileread blog: Stop Rewriting Your Questions. Let Matter Intelligence Do It For You.
The numbers are crazy! An investment of $30 billion from Microsoft, Nvidia and others makes Anthropic worth $380 billion. At what point, at how many billions, are you no longer a startup? "Anthropic said in November it would spend $50bn on US data centres, while OpenAI, which is said to be burning through about $1bn a month, has committed to $1.5tn in infrastructure spending." A billion dollars a month. A trillion and a half dollars in infrastructure! It makes my head spin! Read more at Silicon: Anthropic Doubles Valuation With $30bn Funding Round
My brain was buzzing and going down half a dozen different rabbit holes just reading the title of the post by Rohan Narayana Murty and Ravi Kumar S. Every law firm thinks they have top lawyer talent and skills. But if we coopt the title here, "When every law firm uses Harvey AI (or Vincent, or [insert AI name here]), how do they differentiate themselves?" Well, there will always be a distinction on size - 2,000 lawyers vs 200. Another way is by rates - $1,000/hour vs $250/hour. If AI gets pointed to internal DMSs, then millions of documents vs tens of thousands of documents might be yet another way. Rohan and Ravi define the differentiator as context. It is important to understand how they define context. "Context is demonstrated execution: the workflows teams actually follow across systems, the signals they respond to, the order in which roles get involved, the exceptions that trigger action, and the judgment calls that repeat across real work. These patterns are visible only in execution, not in stated process." Their post is not legal or even service industry specific, but I think it's thought provoking. Read more at Harvard Business Review: When Every Company Can Use the Same AI Models, Context Becomes a Competitive Advantage
This is another post by Bruce Schneier where reading the comments is equally, if not more, thought-provoking than the post itself. Have AI agents become so powerful they can be offended and attempt to commit blackmail? Something to think about as we head into the weekend. Read more at Schneier on Security: Malicious AI
Jonathan Gitlin writes, "All the sophisticated sensors and high-powered computer processing in the world are useless if the car can't move until the door closes and there's no one there to give it a hand." I have yet to ride in a robotaxi, but I am seriously amused at the idea that Waymo is calling DoorDash to get its doors closed. Now I am sure that self-closing doors could be incorporated into a robotaxi design, but the helplessness of this advanced tech is amusing. In a similar vein, Reece Rogers writes, "So, when I saw RentAHuman, a new site where AI agents hire humans to perform physical work in the real world on behalf of the virtual bots, I was eager to see how these AI overlords would compare to my past experiences with the gig economy." Talk about reversing positions! Of course all of this is meaningless and humans will become useless once the robots are perfected. Read more at ars technica: What if riders don't close a robotaxi door after a ride? Try DoorDash. I spent two days gigging at RentAHuman and didn't make a single cent
I'll jump right to the conclusion by Erin Eatough, Keith Ferrazzi, Wendy Smith and Shonna Waters. "AI impact ultimately is tied to whether employees can see a credible place for themselves in the future leaders are building. When leaders start with industry realities, acknowledge the real AI-related risk people feel, and restore visibility into how work is actually changing, adoption stops being something they push and can become something employees help shape." I found the ideas of the "Belief-Anxiety Paradox" and the fact that anxiety can both "Increase AI Use and Still Stall Results" fascinating. It's worth reading more at Harvard Business Review: Why AI Adoption Stalls, According to Industry Data
With glass that is "thermally and chemically stable and is resistant to moisture ingress, temperature fluctuations and electromagnetic interference," femtosecond lasers, a complete absence of energy needed to preserve the data, and densities of over a gigabit per cubic millimeter, you've got one heck of an archive drive. This post is definitely for tech nerds. It is good to know that the billions of #lolcats memes could be preserved for deca-millennia to come! Here is a little Zager & Evans to listen to as you read more at ars technica: Microsoft's new 10,000-year data storage medium: glass
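To put that density figure in perspective, here's a rough back-of-the-envelope sketch in Python. It assumes the post's figure of about one gigabit per cubic millimeter; the platen dimensions are my own illustration, not from the article:

```python
# Back-of-the-envelope capacity for a hypothetical glass platen,
# assuming ~1 gigabit per cubic millimeter (figure from the post;
# platen size is an illustrative assumption).
GBIT_PER_MM3 = 1.0

# Assumed platen: 75 mm x 75 mm x 2 mm (width x depth x thickness)
width_mm, depth_mm, thickness_mm = 75.0, 75.0, 2.0

volume_mm3 = width_mm * depth_mm * thickness_mm      # 11,250 mm^3
capacity_tbit = volume_mm3 * GBIT_PER_MM3 / 1000.0   # gigabits -> terabits
capacity_tb = capacity_tbit / 8.0                    # terabits -> terabytes

print(f"{volume_mm3:.0f} mm^3 -> {capacity_tbit:.2f} Tbit (~{capacity_tb:.2f} TB)")
```

Even at these assumed dimensions, a single palm-sized slab of glass lands in the terabyte range with zero power needed to keep the bits alive.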
I learned two new things from Bob Ambrogi this morning (which is not overly unusual, I learn a lot from Bob and his posts). I wouldn't classify either as earth-shattering pieces of data, perhaps more in the trivia category. Harvey was named for the character Harvey Specter on Suits, and ROSS Intelligence was named for Mike Ross, also on Suits. I am a huge fan of Suits and have binge-watched the series several times. Gabriel Macht and Patrick J. Adams are both fantastic actors, as are Rick Hoffman and Gina Torres (who was also fantastic in Firefly). When it comes to naming artificial intelligences and the companies that make them, I think ROSS Intelligence picked the better character! Regardless of who you liked best, the post is about the fact that "...Harvey the company has entered into a brand partnership with Macht, by which he will be a spokesperson for the company." Read more at LawSites: Harvey Partners with ... well, Harvey, Its Namesake, As Brand Spokesperson
I have thought about getting a 3D printer for years. I never pulled the trigger because I don't really have the space, a real justification, or the need for another expensive hobby. I'm highlighting Ellsworth Toohey's post not to make a political statement (if you're interested, the only gun I own is the rifle I inherited when my father died), but rather to highlight the silliness of our government attempting to legislate outside what would appear to be any sort of subject matter expertise and general reality. "As 3D printing policy expert Michael Weinberg has argued, software can't reliably distinguish a barrel from a pipe fitting based solely on geometry, and open-source firmware means users can flash past any blocking code. 'Stupidity on steroids,' Jon Lareau called it on X - how does a printer know a spring is a trigger mechanism? The Foundry was more direct: the bill 'would require 3D printers to run state-approved surveillance software and criminalize modifying your own hardware.'" If you look at the government's attempts to regulate AI, you see similar issues. A 3D-printed hat tip to Stephen Abram as you read more at boing boing: "Stupidity on steroids" - three U.S. states want your 3D printer to snitch on what you print
Doug Austin links us to the written opinion by New York District Judge Jed Rakoff. You can jump to the 12-page PDF directly here. Doug writes, "Judge Rakoff framed the issue as novel but grounded in traditional legal principles. It explained that the case required deciding 'whether, when a user communicates with a publicly available AI platform in connection with a pending criminal investigation, are the AI user's communications protected by attorney-client privilege or the work product doctrine?' Judge Rakoff answered plainly: 'the answer is no.'" Read more at eDiscoveryToday: The Written Opinion in the AI Documents Not Privileged Case: Artificial Intelligence Trends
Craig Ball has an interesting post today. Actually it's mostly a post by Matt Shumer. As part of his intro, Craig writes, "Hallucinations are no more a reliable measure of AI's future in law than the Wright brothers' first flight was a measure of modern aviation, or Edison's scratchy recording of Mary Had a Little Lamb foretold the limits of recorded music." Of our industry, he continues, "Hand-wringing about hallucinations risks delaying the moment when legal professionals become proficient with tools that will soon be unavoidable. Instead of drafting performative rules aimed at holding back the tide, courts and ethics bodies could be preparing the profession for what is plainly coming - encouraging education, competence, and experimentation rather than fear, uncertainty, and doubt," and I am with him so far. I don't usually disagree with Craig, but when he says, "We have seen this pattern before. Email, fax machines, electronic filing, cloud computing - each was greeted with skepticism and resistance from lawyers convinced their practices could remain insulated from technological change," I begin to disagree. Indeed, we saw FUD with the introduction of those technology tools. We also saw it with the installation of the first DMSs and CRMs. But the fundamental difference is that those technologies delivered on what they promised. The content of your email and fax was the same at the receiving end as it was when you sent it. You can't always say the same thing of generative AI output. I commented recently in a LinkedIn discussion that living in tomorrow with today's AI tools is what gets you in big trouble. I stand by that, but Craig and Matt are right, we need to do better planning for that future. Regardless of my minor disagreement with Craig, his intro and Matt's post are well worth your time. Tip your hat to Doug Austin and prepare to spend a little extra time as you read more at Ball in your court: The Most Important Thing I've Read This Year