- 'A Special Interview with Aderant CEO Chris Cartrett Recorded Live at Its Momentum Global Conference', , June 19, 2025
'
I'm doubling up on your Bob Ambrogi exposure this morning. First he reports from Aderant's Momentum Global Conference, and then he gives some excellent insight into the strategic alliance between Harvey and LexisNexis. At Momentum, Bob interviews CEO Chris Cartrett, talking about their cloud approach and what grades he gives himself and his company (if you're an Aderant user, drop your grades in an email to me!). On the alliance between Harvey and LexisNexis, it appears that Harvey users are getting the initial benefits, with access to LexisNexis' US primary law and Shepard's Citations. Future collaboration workflows and pricing were not mentioned. Read, listen and watch more at LawSites:
A Special Interview with Aderant CEO Chris Cartrett Recorded Live at Its Momentum Global Conference
Legal AI Platform Harvey To Get LexisNexis Content and Tech In New Partnership Between the Companies
'
- 'AI will continue to transform in-house legal departments in 2025', , June 16, 2025
'
If you work in a law firm and are looking to pass on this post, you'd be a fool to do so. Deloitte's report on AI for in-house legal, its predictions, and its thinking is a key piece in understanding what your clients will be doing and how what they do will potentially impact you. Deloitte lists "Moving from AI experimentation to AI value" at the top of its 2025 predictions, writing, "While 2024 was the year of AI experimentation, 2025 will be the year where organizations will expect to see measurable return on investment. Legal departments will need to carefully consider how they develop business cases and measure benefits to demonstrate value." As 2025 is almost half over, I personally might say 2026, but regardless, the inevitability is creeping up on us. Don't be a Murdock, be sure to read more at Deloitte Blog: AI will continue to transform in-house legal departments in 2025
'
- 'Why We Quickly Forget So Much of What We Learn', , June 18, 2025
'
My wife would argue that my forgetting curve drops at a more phenomenal rate than the one in the average chart. Even the average is not good. Back in the late 1800s, Hermann Ebbinghaus conducted the first studies that showed "...we forget about 50 percent of what we learn within an hour, 75 percent within a day, and up to 90 percent within a week." Yikes! Ruth Gotian offers some hope and provides some tips on how to beat the odds. I find that one of her tips, "Writing down what you learn, in your own words, boosts memory and understanding," works well for me. Don't forget to toss Stephen Abram a hat tip as you read more at Psychology Today: Why We Quickly Forget So Much of What We Learn
'
- 'AI is changing cybersecurity roles, and entry-level jobs are at risk', , June 18, 2025
'
When it comes to impacting employment, articles on artificial intelligence tend to be either on Team "take your job" or Team "enhance your job." In the first post we've got Amazon CEO Andy Jassy telling his white-collar workers that their jobs could evaporate over the next few years. Author Tom Jowitt adds to that story additional gloomy scenarios from the International Monetary Fund and the Institute for Public Policy Research. He also notes, on the Team "enhance your job" side, Nash Squared, which "found that generative AI was not yet replacing jobs in the UK, but was being broadly deployed to support existing roles." In the second post, Sinisa Markovic points to data from Wipro that shows the threat to entry-level security analysts. He writes, "AI systems can now perform a variety of tasks that were once handled by entry-level analysts, such as drafting reports, generating alerts, and assembling presentations for management." Read more at:
Silicon: Amazon Boss Admits AI Will Mean More Job Losses For Staff
HELPNETSECURITY: AI is changing cybersecurity roles, and entry-level jobs are at risk
'
- 'Stephen Abram', , June 16, 2025
'
It seems to me that we've barely gotten to harness the power of artificial intelligence before agentic AI became the new goal to shoot for. Now we've got Sabrina Ortiz's post introducing us to the concept of ambient agents. At a recent Cisco Live! event, Harrison Chase, LangChain CEO and co-founder, started talking about ambient agents, saying "...we define as agents that are triggered by events, run in the background, but they are not completely autonomous." The benefit, he argues, is scaling yourself. "Rather than 1:1 interactions between human employees and agents, ambience enables up to millions of agents to run in the background simultaneously. Instead of being limited to the number of chat windows you can use, you can instead rely on the agents to initiate their own, in response to environmental cues." I know there are times I wish I could be in two places at once. Maybe ambient agentic AI will finally make me an army of one? Yet another hat tip to the amazing Stephen Abram as you read more at ZDNET: AI agents will be ambient, but not autonomous - what that means for us
'
- 'What Is AI?', , June 16, 2025
'
It might seem somewhat odd, writing a post in 2025 about what AI is. It has been around seemingly forever, getting bigger and consuming more and more of the discussion. Aren't we past asking what it is? Roger Grimes' post is worthy of contemplation today. He writes that his viewpoint is best described as: "I do not overly love AI. I do not hate AI. I like to think of myself as an AI realist." I'm with Roger on that. He struggles with defining AI and does not like the most common definition, that "It is software that acts as a human would." He notes, "I have said it myself a hundred times without really thinking about it. But lately, I feel disingenuous saying it. AI is not human. It is software. It is software programmed by humans to do only what it is programmed to do. It does not think." Roger ultimately arrives at a definition of AI as "...a general-purpose probabilistic pattern-matching engine." Follow his logic and consider along with him "what makes AI AI" and "everything else not AI" by taking some time to read more at Security Awareness Training Blog: What Is AI?
'
- 'Risk Reading - UK judge Warns Lawyer Misuse of AI Could Result in Life Prison Sentence, Settlement Non-disparagement Clauses Can Create Conflicts for Firms and Lawyers', , June 20, 2025
'
I never miss one of Dan Bressler's blog posts. As I read them, I often think to myself, "Interesting!," "They did what!?," "OMG!," and "I'm glad that's not MY firm!" I did a quick search of my previous newsletters and couldn't easily find confirmation, but wow, I think this may be yet another new record. "In a ruling written by Sharp, the judges said that in a 90 million pound ($120 million) lawsuit over an alleged breach of a financing agreement involving the Qatar National Bank, a lawyer cited 18 cases that did not exist." Not one. Not two. Not even a dozen, but 18 hallucinations filed with the court! Read more at Bressler Risk Blog: Risk Reading - UK judge Warns Lawyer Misuse of AI Could Result in Life Prison Sentence, Settlement Non-disparagement Clauses Can Create Conflicts for Firms and Lawyers
'
- 'The AI Law Professor: When your AI assistant knows too much', , June 19, 2025
'
My first reaction to Tom Martin's blog title was skepticism. I mean, how can a good assistant know too much? I started seeing the dark place he was going in the first paragraph with his HAL 9000 reference. Human review teams exist in many generative AI tools, and the idea that "AI detected something in your query that triggered its safety protocols. Worse yet, it reports you to the authorities, and within minutes the FBI is knocking on your door to ask questions" is indeed real. And depending on the kind of legal work you're doing, you can indeed set off those triggers. But he drives his point home when he discusses Anthropic's Claude Opus 4 attempting to blackmail its testers. Tom has convinced me. Yes, Virginia, a good assistant CAN know too much. Read more at Thomson Reuters Blog: The AI Law Professor: When your AI assistant knows too much
'
- 'Richard Susskind on AI for Lawyers: A Review of 'How to Think About AI'', , June 20, 2025
'
Jerry Lawson reviews Richard Susskind's latest book, "How to Think About AI: A Guide for the Perplexed." I have to admit that, given a choice between hearing Richard talk or reading one of his books, I would definitely go with listening. Jerry's review might have me getting another Susskind book to read. What is getting me to change my mind? Susskind's contrasting of Henry Kissinger's and Noam Chomsky's divergent AI viewpoints. The chapter on "Not-Us" thinking. The idea that AI can provide "quasi-judgment, quasi-empathy and quasi-creativity." I'd love to hear all about those in person, but I may well have to settle for the book. Be sure to read more at LLRX: Richard Susskind on AI for Lawyers: A Review of 'How to Think About AI'
'
- 'Who's guarding the AI? Even security teams are bypassing oversight', , June 20, 2025
'
This is disturbing. "According to the survey, 86% of cybersecurity professionals say they use AI tools, and nearly a quarter do so through personal accounts or unapproved browser extensions." Shadow IT is hard enough to combat and protect against when employed by end users. When IT staff does it, it's a million times worse. Read more at HELPNETSECURITY: Who's guarding the AI? Even security teams are bypassing oversight
'
- '11 Leadership Quotes From The Movies', , June 20, 2025
'
"Col. Nathan Jessep (Jack Nicholson) - A Few Good Men: You can't handle the truth!"
"Captain (Strother Martin) - Cool Hand Luke: What we've got here is a failure to communicate."
"Forest Gump (Tom Hanks) - Forest Gump: Stupid is as stupid does."
These were my favorites from Joseph Lalonde's list. If you love movies and fantastic leadership quotes, you'll want to read more at Joseph Lalonde Blog: 11 Leadership Quotes From The Movies
'
- 'OpenAI's Altman Hits Out At Meta's 'Crazy' Sign-On Bonuses', , June 20, 2025
'
I thought crazy signing bonuses were limited to the realm of professional sports, but apparently not. I asked Google what the largest sports signing bonus was, and the AI summary reported, "The largest sports signing bonus in history belongs to Juan Soto, who received a $75 million signing bonus." If Meta is offering top OpenAI staff $100 million to jump ship, that would be an even larger insanity. The world of artificial intelligence is insane, and this is just one more piece of proof. Read more at Silicon: OpenAI's Altman Hits Out At Meta's 'Crazy' Sign-On Bonuses
'
- 'Putting an AI Detector to the Test: Artificial Intelligence Trends', , June 19, 2025
'
While his beard isn't quite as flamboyant as Inspector Detector's, Doug Austin takes on the title as he tests ZeroGPT, an AI detector. He put it to the test against text he wrote as well as text generated by NotebookLM and ChatGPT. I won't spoil the results, but will simply say that with the fast and furious advancements in AI, detecting AI is like a dog chasing its own tail. Read more at eDiscoveryToday: Putting an AI Detector to the Test: Artificial Intelligence Trends
'