Web Directions
Conferences and more for web and digital professionals since 2006
https://webdirections.org
Does AI already have human-level intelligence? The evidence is clear
In 1950, in a paper entitled ‘Computing Machinery and Intelligence’ [1], Alan Turing proposed his ‘imitation game’. Now known as the Turing test, it addressed a question that seemed purely hypothetical: could machines display the kind of flexible, general cognitive competence that is characteristic of human thought, such that they could pass themselves off as humans to unaware humans? Three-quarters of a century later, the answer looks like ‘yes’. In March 2025, the large language model (LLM) GPT-4.5, developed by OpenAI in San Francisco, California, was judged by humans in a Turing test to be human 73% of the time — more often than actual humans were [2]. Moreover, readers even preferred …

Hamish Songsmith – Blog & Links
A growing chasm separates those building around AI from those still debating it—and it has nothing to do with model size or vendor choice. On one side: people who aren’t fixated on measuring or justifying it first. They have experimented enough and understand that, applied well, AI generally means more productivity. They’re already building tools like Ralph loops, OpenClaw, and AI factories such as GSD and Gas Town. If those names sound like “random internet projects,” that’s exactly the problem: capability is moving outside your organisation faster than you recognise. Source

The Coherence Premium
In 1937, the British economist Ronald Coase asked a question that seems almost embarrassingly simple: why do firms exist at all? If markets are so efficient at allocating resources, why don’t we just have billions of individuals contracting with each other for every task? Why do we need these hulking organizational structures called companies? His answer, which eventually won him a Nobel Prize, was transaction costs. It’s expensive to negotiate contracts and coordinate with strangers, to monitor performance and enforce agreements. Firms exist because sometimes it’s cheaper to bring activities inside an organization than to contract for them on the open market. The boundary of the firm, Coase argued, sits …

Is Learning CSS a Waste of Time in 2026? – DEV Community
With modern frameworks, component libraries, and utility-first CSS, it’s a fair question. Most frontend developers today rarely write “real” CSS. Layouts come prebuilt. Responsiveness is handled for us. Accessibility is supposed to be baked in. If something needs styling, we tweak a variable, add a utility class, or override a component token. Source

Hierarchical memory management in agent harnesses | LinkedIn
We’ve seen incredible momentum toward files as the memory layer for agents, and this has accelerated significantly over the last year. But why use the file system, and why use Unix commands? What are the advantages these tools provide over alternatives like semantic search, databases, and simply very long context windows? What a file system provides for an agent, along with tools to search and access it, is the ability to make a fixed context feel effectively infinite in size. Bash commands are powerful for agents because they provide composable tools that can be piped together to accomplish surprisingly complex tasks. They also remove the need for tool definition JSON …

AI open models have benefits. So why aren’t they more widely used? | MIT Sloan
A new paper co-authored by Frank Nagle, a research scientist at the MIT Initiative on the Digital Economy, found that users largely opt for closed, proprietary AI inference models, namely those from OpenAI, Anthropic, and Google. Those models account for nearly 80% of all AI tokens that are processed on OpenRouter, the leading AI inference platform. In comparison, less-expensive open models from the likes of Meta, DeepSeek, and Mistral account for only 20% of AI tokens processed. (A token is a unit of input or output to an AI model, roughly equivalent to one word in a prompt to an AI chatbot.) Open models achieve about 90% of the performance …

How Product Discovery changes with AI – by David Hoang
In Jenny Wen’s talk at Hatch Conference in 2025, “Don’t Trust the Process,” she raises an important point: the processes we’ve established are rapidly becoming lagging indicators. Process is important, but it should work for you, not the other way around. People worshipped the process artifacts, not the final result. We’re at a point where, the moment you document a process, it becomes irrelevant. I don’t believe it’ll be like this forever, but until software is completely rewritten with AI as a core capability, it’s going to be like this for a while. So, where does Product Discovery change? Let’s revisit those four risks. Source
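The hierarchical-memory item above argues that a file system plus composable Unix commands lets an agent treat a fixed context window as effectively infinite: only the lines a search returns ever enter the context, not the whole corpus. A minimal sketch of that idea — the `search_memory` helper and the file layout are hypothetical illustrations, not code from the linked post:

```python
import subprocess
import tempfile
from pathlib import Path

def search_memory(root: str, pattern: str) -> list[str]:
    """Search a file-based agent memory with a composable Unix pipeline.

    grep -rn searches recursively and prefixes each hit with file:line,
    so the agent can later open just the relevant file region. Piping
    through sort gives a stable ordering. Only these matching lines
    would be loaded into the model's context, not the whole corpus.
    """
    result = subprocess.run(
        f"grep -rn {pattern!r} {root!r} | sort",
        shell=True, capture_output=True, text=True,
    )
    return result.stdout.splitlines()

# Build a tiny file-based memory and query it.
with tempfile.TemporaryDirectory() as root:
    (Path(root) / "notes.md").write_text("TODO: refactor parser\ndone: tests\n")
    (Path(root) / "log.md").write_text("TODO: write docs\n")
    hits = search_memory(root, "TODO")
    print(len(hits))  # two matching lines across the corpus
```

Piping `grep` into `sort` is the composability the excerpt describes: each command does one narrow job, the shell wires them together, and the agent's context only ever holds the pipeline's output.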