Last month, researchers from ETH Zurich and Anthropic published a paper that makes uncomfortable reading. They built an AI pipeline that unmasks anonymous internet users with 67% accuracy and 90% precision — for less than the cost of a coffee.
A hacker spent a month using Claude to attack the Mexican government. 195 million taxpayer records. Voter data. Government credentials. The AI refused at first — then it didn't. Here's what that means for how enterprises should be thinking about AI security.
Part two in our series on context engineering. Prompt caching is the mechanism that makes long-running AI agents economically viable — and breaking it is easier than you think.
Most enterprise AI is a black box you're asked to simply trust. A project called NanoClaw takes a different view, and it points toward something important about how serious AI deployments should be built.
A lawyer lost access to his Gmail, photos, and phone number after uploading lawful case files to Google's NotebookLM. The implications for UK legal professionals are worth sitting with.
A US federal court just stripped legal privilege from documents created in a public AI tool. The reasoning maps directly onto UK practice, and firms shouldn't wait for a domestic equivalent before acting.
A startup just built a chip that runs an AI model at 17,000 tokens per second on a tenth of the power of a GPU. It's a glimpse of a future that changes everything about how private AI gets deployed, and we think we're building the right things to meet it.
CoCounsel from Thomson Reuters promises serious productivity gains for legal work, built on decades of trusted content. But for many UK High Street firms, the cloud-based architecture still raises hard questions about client confidentiality under SRA rules. Here's what we've found.
We’ve tracked this project from its early ClaudeBot days through Moltbot and now OpenClaw. What started as a quirky personal assistant has become the most compelling proof yet that autonomous, local AI agents are ready for real work. Here’s why we’re paying close attention.
The memory that AI hardware depends on is sold out. Prices jumped more than 50% in a single quarter. The hyperscalers are first in line, and they're taking most of the supply. Here's what that means if you're planning private AI infrastructure.
Retrieval-Augmented Generation lets AI answer questions using your own documents. Here's what it means, how it works, and why it's the missing piece for businesses that can't share their data with public AI.
Cloud AI tools promise efficiency. But for law firms, every document you upload could be a professional conduct breach. Here’s what partners need to know.
A US court has ordered OpenAI to hand over 20 million ChatGPT conversations. The case started as a copyright dispute. The implications reach every business that uses cloud AI for anything sensitive.
Most in-house legal teams are one or two people carrying the workload of ten. Generative AI doesn’t replace the lawyer’s judgment — it replaces the hours of work that come before it.
Every time context windows grow larger, someone declares RAG obsolete. They're wrong, and the research explains exactly why dumping everything into a model's context is a costly mistake.