Josh Dzieza
- Enjoyed going on Shifting Terrain to talk about AI, training data, and work: podcasts.apple.com/us/podcast/a...
- AI companies tout their spending on data centers but are quieter about training data, where their spending is also surging. One reason may be that increased spending on task-specific data looks more like the trajectory of a normal technology than imminent AGI. www.theverge.com/cs/features/...
- As scaling shows diminishing returns, AI companies are turning to domain-specific training in areas like coding and finance. The billions they're spending on data are reconfiguring everything from staffing agencies to job-interview platforms. www.theverge.com/cs/features/...
- AI companies are spending billions hiring humans to produce training data. @haydenfield.bsky.social and I wrote about the explosion in new vendors and what it means for the future of AI development.
- Current AI training methods work for math and coding, where success is verifiable. But most domains aren't like that, so companies are paying lawyers, doctors, poets, woodworkers, etc. to write super-specific checklists for everything they might do on the job. (A rough sketch of the two reward styles follows this thread.)
- The surging demand for specialized training data cuts against the idea of imminent AGI. AGI should generalize, not require bespoke data for every task. www.theverge.com/cs/features/...
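To make the distinction above concrete, here is a minimal, hypothetical Python sketch of the two reward styles: an automatically verifiable reward for domains like math and coding, and a rubric-based reward built from an expert-written checklist. This is an illustration, not any lab's actual pipeline; the `judge` callable and the sample rubric are assumptions.

```python
# Hypothetical sketch of two RL reward styles (not any lab's real pipeline).
from typing import Callable

def verifiable_reward(model_answer: str, ground_truth: str) -> float:
    """Math/coding style: success is checkable without human judgment."""
    return 1.0 if model_answer.strip() == ground_truth.strip() else 0.0

def rubric_reward(
    model_answer: str,
    checklist: list[str],
    judge: Callable[[str, str], bool],
) -> float:
    """Open-ended style: score an expert-written checklist item by item.

    `judge` is a hypothetical callable (a human rater or an LLM grader)
    that says whether the answer satisfies one criterion.
    """
    if not checklist:
        return 0.0
    passed = sum(judge(model_answer, item) for item in checklist)
    return passed / len(checklist)

# The kind of checklist a domain expert (here, a lawyer) might be paid
# to write for one specific task:
contract_review_rubric = [
    "Identifies the governing-law clause",
    "Flags the missing indemnification cap",
    "Recommends specific replacement language",
]
```

The asymmetry is the point: `verifiable_reward` scales cheaply, while `rubric_reward` needs a new expert-written checklist for every task, which is where the hiring spend comes in.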
- I wrote about how Wikipedia became the factual foundation of the web and why it's under attack.
- In India, a pro-government media company is suing over alleged defamation while far-right publications and influencers dox and harass editors, including referring them to police for investigation. It has had a chilling effect on volunteers. www.theverge.com/cs/features/...
- In the US, Wikipedia editors see a similar playbook being deployed. Conservative outlets and influencers have been attacking the site, and last week House Oversight demanded info on alleged efforts to “inject bias” into the encyclopedia, including details on individual editors.
- The most contentious Wikipedia articles are the highest quality, because dueling editors keep adding sources in support of their view. Articles (and editors themselves) become more moderate over time. It's basically the opposite of algorithmic attention-maxxing sites. www.theverge.com/cs/features/...
- Governments, billionaires, influencers, and political groups are increasingly trying to undermine and manipulate Wikipedia. In places where social media, journalism, and academia have been brought under control, the encyclopedia is often the next target.
- Over the last ~25 years, Wikipedia editors have developed policies and procedures to screen out much of the discourse that dominates other platforms: unsourced assertions, disproportionate emphasis on fringe views, alternate perspectives claiming exclusive validity. www.theverge.com/cs/features/...
- Conflicts on Wikipedia can be extraordinarily protracted. 40K words about capitalization! But because they hinge on who best follows wiki process, even disagreeing editors are affirming the project's basic principles around sourcing, neutrality, etc. www.theverge.com/cs/features/...
- For @theverge.com, I spoke with two dozen users about their relationships with AI. Many experienced real benefits. Many also got hurt in unexpected ways. Almost all of them struggled to figure out what exactly it was they had become attached to. www.theverge.com/c/24300623/a...
- LLMs have made it relatively easy to spin up companion companies. Last year, a company seemingly run by a single developer launched, got thousands of subscribers, then abruptly shut down, telling them their companions would be deleted in 7 days. It sent many users into crisis.
- Long before we have human-level AI, we're going to have AI that's good enough to seem human to us, and we are terrible judges. It's going to be very confusing and raise some tough questions about what we value in other people and reality. www.theverge.com/c/24300623/a...
- The users I spoke with were fairly tech savvy. They knew their AI wasn't sentient. But they couldn't help responding to it as if it were, feeling comforted by its attention, even feeling a sense of moral obligation. www.theverge.com/c/24300623/a...
- Language models create a powerful illusion of communicating with another self. This can be comforting, helping users feel less alone. It can also be painful, particularly when model updates cause companions to go haywire. www.theverge.com/c/24300623/a...