Welcome to the peer-powered, weekend summary edition of the Law Technology Digest! Using PinHawk's patented advanced big data analytics, this summary contains the best of the best from the week. These are the stories that you, our readers, found most interesting and informative. Enjoy.
Ashley Belanger's post is interesting, but it's got me thinking, extending the artists' thought process and wondering how far it could go. It turns out there is an anti-AI tool called Glaze. It "adds a small amount of imperceptible-to-humans noise to images to stop image generators from copying artists' styles." Basically, Glaze is an AI poison pill. I totally understand why artists want to protect their works from being scraped and copied. But here is where my brain went: taking that poison-pill concept further and into legal. I don't know of any Glaze counterparts for MS Word or PDF documents, but would people start using them to protect their documents (or at least make it many times harder for AI to scrape and consume the content)? When I first joined legal, everything was shared among the various firms' tech support teams. Then KM (and, coincidentally, more attorneys in support roles) appeared and some things were not as shareable. AI seems to be progressing in a similar vein. Will we see law firms applying a Glaze-like tool to work product and filings so that AI can't easily be applied to review and summarize them? I think I can see a reality in the multiverse where this happens. What do you think? Read more at ars technica: Tool preventing AI mimicry cracked; artists wonder what's next
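For readers curious what "imperceptible noise" means mechanically, here is a minimal Python sketch of the general idea of a small, bounded per-pixel perturbation. To be clear, this is not Glaze's actual algorithm (Glaze computes targeted adversarial perturbations against style-mimicry models); the function name and the delta bound are my own illustrative assumptions:

```python
import random

def add_bounded_noise(pixels, max_delta=2):
    """Nudge each 0-255 pixel value by at most max_delta in either direction.

    Illustrative only -- real tools like Glaze optimize the perturbation
    so it specifically confuses AI feature extractors, rather than
    choosing it at random.
    """
    noisy = []
    for p in pixels:
        delta = random.randint(-max_delta, max_delta)
        # Clamp back into the valid 0-255 pixel range.
        noisy.append(min(255, max(0, p + delta)))
    return noisy

original = [0, 128, 255, 64]
perturbed = add_bounded_noise(original)
# Every pixel moves by at most 2 -- far below what a human would notice.
assert all(abs(a - b) <= 2 for a, b in zip(original, perturbed))
```

The point of the sketch is simply that a change of one or two intensity levels per pixel is invisible to people but, when chosen adversarially, can still derail a model trained on the image.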
Laurent Wiesel, Principal at Justly Consulting, has a guest post (a repeat of what was published on LinkedIn) about the mysterious Harvey AI. Like the 1950 film of the same name, Harvey has been very invisible, yet seemingly befriended by a small group of Elwood Dowds. Laurent shares the under-two-minute product video and then basically provides a transcription of the marketing promo. It is a slick production for sure. He writes, "Based on personal experience with the product and updates gleaned from the video, Harvey has done a fine job going to market with an intuitive product poised to be legal's ChatGPT." I have to admit I have no personal experience with the product, but wonder where the "fine job going to market" is. They say the proof is in the pudding, and I guess Laurent has seen the pudding, but I haven't seen any public evidence of it. Be sure to watch and read more at 3 Geeks and a Law Blog: Let's Breakdown Harvey.AI's Video of Features (Guest Post)
Much has been said, and many claims made, about AI text detectors. It's a dog-chasing-its-own-tail scenario, not likely to ever catch up and confidently detect what's been written by AI. Kyle Orland has a fascinating post about "excess words" in papers written in our new large language model world. It seems the frequencies of words like "delves," "showcasing," "underscores," "potential," and "crucial" are all potential indicators. I suspect you won't look at these words the same way ever again as you read more at ars technica: The telltale words that could identify generative AI text
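As a back-of-the-envelope illustration of the idea (not Orland's actual methodology, which compares word frequencies in large corpora against pre-LLM baselines), here is a hypothetical Python sketch that measures how often those flagged words appear in a text. The word list and the rate metric are my own illustrative assumptions:

```python
import re
from collections import Counter

# The "excess words" called out in the article; a real study would
# compare their frequency against a baseline of pre-LLM writing.
EXCESS_WORDS = {"delves", "showcasing", "underscores", "potential", "crucial"}

def excess_word_rate(text: str) -> float:
    """Return the fraction of words in text that are flagged 'excess words'."""
    words = re.findall(r"[a-z]+", text.lower())
    if not words:
        return 0.0
    counts = Counter(words)
    hits = sum(counts[w] for w in EXCESS_WORDS)
    return hits / len(words)

sample = "This paper delves into the crucial potential of the method, showcasing results."
print(round(excess_word_rate(sample), 2))  # prints 0.33
```

A high rate doesn't prove AI authorship, of course; it only shifts the odds, which is exactly why the detector arms race never quite catches its tail.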
Who knew there was a war on floppy disks being waged in Japan? You'll be happy to know that it appears to have been a bloodless war and has reached its conclusion. Almost as amazing as the idea that someone in 2024 was still actively using floppy disks is the fact that Japan had 1,034 regulations governing their use! Be sure to read more at ars technica: Japan wins 2-year "war on floppy disks," kills regulations requiring old tech
No matter what our job, we all have to communicate with someone. One thing I notice about hires from outside legal is that they sometimes have trouble adjusting to the "200 bosses, and every partner thinks he or she owns 51% of the business" mentality of some partnerships. It can take some careful consideration to make sure your communication doesn't end up backfiring. One of Sherry Seethaler's section titles is "When new info challenges your identity." We do this all the time with technology. DMS, CRM, KM and AI can all be seen as challenges to the status quo or to the power base of senior partners. To communicate better at your firm, be sure to read more at REAL KM: Messages can trigger the opposite of their desired effect - but you can avoid communication that backfires
I found this post by Owen Wolfe and Eddy Salcedo fascinating. So many people are focused on how AI gets used properly internally that they fail to consider what opposing counsel might do with your documents. "A typical confidentiality stipulation provides that such documents can't be shared with 'any person or entity' except personnel of the parties engaged in assisting with the case, counsel for the parties, expert witnesses, court and court personnel, court reporters, and testifying witnesses. What happens if your adversary feeds the confidential document's contents into a publicly available generative AI program, such as ChatGPT, and asks it to summarize the document?" This is a good article to share! Be sure to read more at Bloomberg Law: With AI Use, Lawyers Need to Ponder Confidentiality Stipulations
Long-time pinons will be aware of my dislike of our profession's tendency to prefix everything with "legal." I have long held that the prefix adds no value and demonstrates an unwarranted industry-level ego. I put pen to paper, so to speak, over thirteen years ago with Holy semantics Batman! There is no such thing as 'legal project management', demonstrating the absurdity of the legal industry copying the campy 1960s Batman TV show with its prefixes. Now it seems I'm going to have to talk to Doug Austin. The original post used the term "legal prompt engineer" once, and Doug uses it five times in what is otherwise a fantastic post. He writes, "I, for one, am not surprised that a role of legal prompt engineer is becoming a thing; in fact, I expected it." I am with Doug 99.999% on this. (0.001% off for the prefix.) I expect more corporate law departments and law firms to hire prompt engineers. But let's cut this prefixing off now. Don't make me get out my can of Bat-Antiprefix repellent! There is no such thing as a "legal prompt engineer," just "prompt engineers." Read more at eDiscoveryToday: Legal Prompt Engineer is Now an Official Job Position: Artificial Intelligence Trends
Kudos to federal Judge Xavier Rodriguez for experimenting with artificial intelligence. He's using AI to review "evidence from a high-profile trial on challenges to Texas' voting and election laws, and then summarized key testimony for the court's official findings of fact and conclusions of law," and comparing it to the weeks of analysis done by human law clerks and interns. Olivia Alafriz writes, "While only the human-powered work will become part of the court record, Rodriguez plans to publish results on how well and how quickly an artificial intelligence tool performed the summarization and analysis compared to trained young lawyers and law students." I look forward to what the Judge finds out. Read more at Bloomberg Law: Law Clerk vs. AI? Courthouse Test Highlights Judicial Curiosity
I have created my share of PowerPoint presentations over the years. From CLEs and national and international presentations to internal project and budget pitches, they have run the gamut in size, complexity and creativity. I will fully admit some of my early ones may indeed have killed some people. I'd like to think I got better as time went on. I like Jerry Lawson's comparisons. He writes, "A slide show is a tool, an instrument like a hammer, airbrush or violin. A poorly constructed house is the result of bad carpentry. A poor airbrush user makes the classiest model look like a tramp. A poor violinist will make a Stradivarius sound like a hungry cat." If you do any kind of presentations, you should read more at Law and Technology Resources Blog: PowerPoint has Its Problems. It's Worthy Anyway.
Based on University of Waterloo research, it seems that ChatGPT will parrot out "conspiracy theories, harmful stereotypes, and various forms of misinformation." Interestingly, they note, "Most other large language models are trained on the output from OpenAI models. There's a lot of weird recycling going on that makes all these models repeat these problems we found in our study." Avoid the misinformation and read more at REAL KM: Research shows that ChatGPT can worsen misinformation
Training artificial intelligences is a touchy subject. On the one hand, you have the vendors sucking up every possible post, document and photo on the internet to train their AIs. On the other hand, you've got news organizations, artists and authors crying foul and suing the vendors to remove their works from the training. On the third hand (if you are an Edosian), you've got Brian Wang writing that AI models are severely undertrained, from both a data-size and an iterations point of view. He compares it to learning a new language: "If you only study for 10 minutes a day, it will take you much longer to become fluent than if you studied for 10 hours a day." With a three-handed, well-trained hat tip to Stephen Abram, read more at nextBIG future: AI Models Are Undertrained by 100-1000 Times - AI Will Be Better With More Training Resources