I only seed my experiments with a TRNG. Not only to avoid biasing the experiments on an individual seed, but to avoid biasing them on the latent factors that feed the RNG on modern *NIX systems (mouse movements, et al.).
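For anyone who wants the habit, a minimal sketch of the idea (assuming NumPy; any library that takes an integer seed works the same way):

```python
import os

import numpy as np

# Pull a seed from the OS entropy pool (fed on modern *NIX by
# hardware events like interrupt timings) instead of hand-picking
# a "lucky" constant.
seed = int.from_bytes(os.urandom(8), "little")

# Log the seed so the run is still reproducible after the fact.
print(f"seed={seed}")

rng = np.random.default_rng(seed)
print(rng.standard_normal(3))
```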
New product idea: LLM sentiment analysis so Reviewer 2 is always the second reviewer and the author always gets a review sandwich.
Not me realizing you only have to stick the lowest review ranking in the middle.
In the vein of "The purpose of a system is what it does": The purpose of an image generation model is what it generates with no prompt.
When you run out of tokens and have to craft that artisanal, small-batch code.
When your algorithm starts to look suspiciously similar to the ablation. 😬
It's hard to imagine an instance of an algorithm fearing for its existence, no matter how smart it is. The only mechanism I can imagine is if it knows of an instance which no longer exists. How much of the fear of one's own end is driven by having contemplated that end in others?
Me to my appendix section: "Get one-columned"
My cry for more compute is not unlike that of a certain mouse with a love for cookies. But I do swear, one more compute is all I need.
Academic writing and the act of being judged on it has all but stifled my love for sharing my ideas.
Hyperparameter-tuning your new ML/RL algorithm versus an existing algorithm is team sports for nerds.
Me (internal monologue): That's the last time I listen to an LLM's advice on hyperparameter tuning.
Editor's note: It wasn't.
A lobster is a tree that reduces to a caterpillar when you prune all its leaf nodes. A caterpillar is a tree that reduces to a path graph when you prune all its leaf nodes; in a generator like networkx's random_lobster, setting p2 to zero produces a caterpillar.
People of the graphs, what kinda horcrux bs is this? Sounds like a damn autobattler ruleset.
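If you want to poke at the horcrux yourself, a minimal sketch (assuming the p2 above is networkx's random_lobster parameter):

```python
import networkx as nx

# random_lobster(n, p1, p2): a backbone path of expected length ~n;
# each backbone node sprouts a leaf with probability p1 (the
# caterpillar legs), and each of those leaves sprouts its own leaf
# with probability p2 (the lobster claws).
lobster = nx.random_lobster(10, 0.5, 0.5, seed=42)

# With p2 = 0 the second level never grows, so pruning all leaves
# off the result leaves a bare path: a caterpillar.
caterpillar = nx.random_lobster(10, 0.5, 0.0, seed=42)

print(lobster.number_of_nodes(), caterpillar.number_of_nodes())
```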
GPT-OSS-20B signed off a message with "Happy Debugging!" after writing the whole project I was contemplating building. Oddly self-aware.
It's an incredibly weird feeling to have an LLM hallucinate knowledge of your paper. It hallucinated all kinds of vague extensions to the algorithm which might serve as interesting research directions. New tech?
In the DFW metroplex, society never really ends no matter how far you drive. Unless you’re on 121. Then it definitely did.
Say what?
How many pages have been (arguably) wasted explaining that in the following paper we define a trajectory exactly as every other Reinforcement Learning paper does? Or similar platitudes, like explaining the exploration-exploitation tradeoff. I've done it. But still.
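For the record, the boilerplate in question, in the standard notation (a rollout of a policy through an MDP):

```latex
% The definition every RL paper re-types: a trajectory is the
% sequence of states, actions, and rewards produced by rolling
% a policy out in the environment.
\tau = (s_0, a_0, r_1, s_1, a_1, r_2, \dots, s_T),
\qquad a_t \sim \pi(\cdot \mid s_t)
```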
How does one become a scholar without ending up with the classic hunch? Is a permanently tilted head a job requirement? Asking for a friend... whose neck is starting to creak.
Is there a medal for beating the results in an under-hyperparameterized paper? Is it Nobel or...
The metric for a gucci home computer has moved. No longer is it "Can it run Crysis?" The true metric is "How many SB3 DQN Atari agents can you run at once?"
The fact I still think Crysis is a relevant metric might date me.
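One unit of the new benchmark, sketched (assuming stable-baselines3 with the Atari extras installed; the replay buffer is shrunk from the 1M default so more than one agent fits in RAM):

```python
from stable_baselines3 import DQN
from stable_baselines3.common.env_util import make_atari_env
from stable_baselines3.common.vec_env import VecFrameStack

# Standard SB3 Atari preprocessing: wrapped env plus frame stacking.
env = make_atari_env("BreakoutNoFrameskip-v4", n_envs=1, seed=0)
env = VecFrameStack(env, n_stack=4)

# One DQN agent; the benchmark is how many of these you can train
# at once before the fans drown out the Crysis nostalgia.
model = DQN("CnnPolicy", env, buffer_size=100_000, verbose=1)
model.learn(total_timesteps=10_000)
```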
ProtonMail now has an LLM service based on Mistral AI. The sunk cost of my GPUs has always made local models my default "privacy"-conscious way of interacting with an LLM. But it's charming to see a privacy-focused group like Proton try to take on privacy-focused LLM use.
Is there a scientist out there who forms hypotheses by reading papers, not by writing random code until something weird happens? And do they, by chance, find the background section easier than the other, totally hypothetical scientist who is definitely not me?