David Huang
davidhuang.blog || big on AI, thinks AGI is here already || AI drug discovery, auctions, physicalism in theory and dualism in practice, religion as a solution to the folk theorem || general studies at YouTube || views my own, RS not endorsement
- This is where extra productivity goes… does she really need a team of psychiatrists and doctors… if we have more surplus do we need to add even more psychiatrists to her team?
- There are only two people I’ve heard defend not enacting the Epstein transparency bill… one hard-right Republican congressman and the other a guy on the New Yorker podcast
- Fascinating to hear Emmett on the A16Z podcast. In response to the question (how would you know something is different other than by its behavior, and generally what would make you grant AI personhood?), he said: I would need to believe that I could have been in the AI’s shoes, in a veil-of-ignorance way.
- I’d love to see what % ownership stake public companies hold in OpenAI and Anthropic
- Idk the foreign policy and the ballroom seem aight, maybe just focus on the stuff like the president suing his own department of justice for hundreds of millions, or the pardons, or the forced national guard deployments across state lines
- it's painful to see people grasping onto "True AGI", "datacenter of geniuses", "Year of the Agent", "Massive job replacement" when it's just clearly not the case...
- definitely resonate with LLMs 1. giving overly defensive code 2. being unable to break out of common frameworks and APIs
- Ok but isn’t this kinda expert systems all over again?
- Causation module; phase-shift gradient flow; something like that for rapid learning and high sample efficiency
- Sample inefficiency??
- Suppose your model of intelligence requires (1) NOT imitation learning, (2) NOT virtual environments, and (3) NOT compressed timelines; then what BENEFIT is there to training an “artificial” intelligence vs an “organic” intelligence? Isn’t the organic just better?
- I think you learn a lot about a person when you learn what is the tiny garden they will protect from slop and where they will let slop go wild.
- Too much spam to take notes this way… maybe more subdomains is the answer
- He does know he is trying for the Nobel PEACE Prize right? Like put some effort into it. Just as easily could have been the Department of PEACE.
- Russell, Bertrand. A History of Western Philosophy
- I use ChatGPT with memory off and no longer than 3 back and forth messages
- I see this as a sort of "LLM as a judge" in the judicial family court and arbitration sense (arbitration broadly speaking)
- The problem with “reasoning” models is a limited conception of reasoning. The math/CS focus makes it easy for them to understand internally consistent realms, and sensemake there. It also makes it easier to have psychotic breaks, but harder to make sense of cloudy information.
- Like a toddler
- The perception is that being a young, married homeowner with kids is structurally difficult... but the truth is I think a lot of people make it tougher on themselves than they need to... I think a lot of it boils down to (perceived) opportunity cost... both current and future
- I mean....
- And these vibes are self reinforcing…
- Owned by Murdoch too.. a reminder like Supreme Court votes that there is still room to go…
- If you ever wanted to see how knowledge and truth are completely unmoored from the context-free hellscape LLMs inhabit… try writing the rules for a prediction market.
- Why is the GOP trying to shut down the release of the Epstein files? Why is Trump trying to change the subject?
- You start seeing tokenization everywhere with a kiddo: “sunglasses” is a single token
- In the United States, staffers don’t brief the President, the President briefs them.
- It's incredible how much of "AI lessons" is around model-get-smarter and no-expert-systems etc. but even still everything blows up into system-prompted systems... e.g. Grok blow-up, Replit blow-up and so on.
- David Pfau - I'm starting to think that coding with LLMs is a bit like riding an electric bike - you don't really get to your destination any faster, but it can be much easier going up difficult hills (but you don't build up any strength in the process).
- Both takes, especially in light of LLM psychosis, are correct. The obvious answer is that memory should be turned off! Personalized LLM interactions are poisonous and anti-social. Memory is OK when shared across an organization or individuals. LLM therapy should be memoryless.