Michael Kopp
Believer in memory-to-memory connections who wonders what "memory" and "compute" really mean.
- On the flip side, when this current "data center" (more like large GPU clusters turned supercomputer) bubble bursts, they will hold the bum side of that trade and electricity might become very cheap ...
- ... "Totgesagte leben länger" (German proverb: those declared dead live longer) ...
- ... a less rose-tinted view of the same set of facts is that a vendor whose valuation is sky-high due to solid demand is now funding its own demand pipeline, and doing so by funding one client over others. Red flags, anyone?
- Maybe they are confusing antipasto in "an Italian meal is nothing without antipasto" with being anti pasta ...
- Great to see xLSTMs excelling at human action segmentation. arxiv.org/abs/2506.09650
- Reposted by Michael Kopp: We are soooo proud. Our European-developed TiRex is leading the field, significantly ahead of U.S. competitors like Amazon, Datadog, Salesforce, and Google, as well as Chinese models from companies such as Alibaba.
- Ever thought you could get a state-of-the-art 35-million-parameter time series foundation model that you can run on embedded hardware? Thanks to @hochreitersepp.bsky.social and his team at NXAI, you can. Amazing work! Paper: arxiv.org/abs/2505.23719 Code: github.com/NX-AI/tirex
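A minimal usage sketch, assuming the zero-shot API follows the pattern in the NX-AI/tirex README; `load_model`, `forecast`, and their signatures are assumptions to verify against the repository:

```python
# Sketch: zero-shot forecasting with TiRex. Entry points assumed from the
# NX-AI/tirex README -- check github.com/NX-AI/tirex before relying on them.
import torch
from tirex import load_model  # assumed import path

model = load_model("NX-AI/TiRex")      # ~35M-parameter pretrained model
context = torch.rand((4, 256))         # 4 toy series, 256 past values each
quantiles, mean = model.forecast(context=context, prediction_length=32)
print(quantiles.shape, mean.shape)     # quantile and point forecasts, 32 steps
```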
- Reposted by Michael Kopp: TiRex 🦖 time series xLSTM model ranked #1 on all leaderboards. ➡️ Outperforms models by Amazon, Google, Datadog, Salesforce, Alibaba ➡️ industrial applications ➡️ limited data ➡️ embedded AI and edge devices ➡️ Europe is leading Code: lnkd.in/eHXb-XwZ Paper: lnkd.in/e8e7xnri shorturl.at/jcQeq
- If you have to say it ... it is an issue. Reminds me of football club presidents coming out and saying their manager is safe. Usually that means said manager is gone not long after. If a risk does not exist it need not be addressed.
- Nice to see xLSTM slightly edging the transformer in classifying assembly tasks. arxiv.org/abs/2505.18012
- Nice to see xLSTM applied and succeeding in speech enhancement. arxiv.org/abs/2501.06146
- Surprising it should not be ...
- Not sure "come train our proprietary AI with your data so we can sell it back to you" is such a catchy slogan ...
- Some warped memory that. Is the US VP ok? Maybe he needs a break. Also, I remember the term "freedom fries", which replaced French fries, which were never French to start with ... anyway. youtu.be/CpuN-yM1sZU?...
- Thank you @digthatdata.bsky.social. Will mull it over.
- Interesting observations. arxiv.org/abs/2504.07965
- I hope they thought through Northern Ireland and its hybrid status of being in the Single Market but part of the UK.
- Great read! arxiv.org/abs/2401.17173
- Trade balance on GOODS only it seems. If the incoming reciprocal actions to these reciprocal tariffs include services into a similarly Kafkaesque formula, that would be bad news for Silicon Valley.
- In summary: 1., 4. and 5. made me into a functional analyst; 2. and 3. were key to my research in mathematics; 6. and the entire circle of ideas about memory that arose from it inspire me even today; 7. literally helped me navigate 2008/9.
- Very topical again today, I feel. Helps in understanding which theories are necessarily peddled by "flat earthers" and which might have a point. I actually met Wynne, sadly long before I was at all interested in economics. A very decent gentleman, indeed.
- 7. The only economics books that I could actually use to a) make sense of the data I was seeing and hence b) make money from trading: Wynne Godley's "Macroeconomics" and (together with Marc Lavoie) "Monetary Economics". In AI this would be called "Sector Accounting is All You Need".
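For readers new to Godley: "sector accounting" rests on a national-accounting identity (my paraphrase, not a quote from the books). Writing S for private saving, I for private investment, T for taxes, G for government spending, X for exports, and M for imports:

```latex
% From Y = C + I + G + (X - M) and Y = C + S + T:
% the three sectoral financial balances must sum to zero.
(S - I) + (T - G) + (M - X) = 0
```

So if the private sector wants to net-save while the country runs a trade deficit, the government must run a deficit; the stock-flow-consistent bookkeeping built on this identity is what makes the data legible.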
- 6. The original LSTM papers (the 1994 and 1997 versions). I might be biased, but still very topical and, imho, underexplored.
- 5. Tim Gowers' paper on arithmetic progressions of length 4. Kick-started an entire circle of ideas (still going) that has revolutionized what we know about primes, upper bounds on how random things can be, and techniques to exploit the latter.
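To make the object concrete: Gowers' paper bounds how large a subset of {1, ..., N} can be while containing no 4-term arithmetic progression. A toy brute-force counter (my illustration only; the proof itself runs through quadratic Fourier analysis and the U^3 norm, nothing like this):

```python
# Count 4-term arithmetic progressions a, a+d, a+2d, a+3d (d > 0) in a set.
def count_4aps(A):
    S = set(A)
    count = 0
    for a in S:
        for b in S:
            if b > a:
                d = b - a
                if a + 2 * d in S and a + 3 * d in S:
                    count += 1
    return count

# Example: 4-term APs among the primes below 200 (e.g. 5, 11, 17, 23).
primes = [p for p in range(2, 200)
          if all(p % q for q in range(2, int(p ** 0.5) + 1))]
print(count_4aps(primes))
```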
- 4. A way of rigorously developing complex analysis (incl. its topological aspects) via Fredholm operators on Hilbert spaces, winding numbers as indices of Cauchy-Riemann operators etc. Not how one should teach it - but an amazing demonstration of the synthesis of 19th and 20th century geometry.
- The best reference is probably Atiyah's book on K-theory (recommended anyway) and the paper with Bott on the subject that it references.
- Contained key ideas for tackling automatic continuity questions, which were ultimately extended to reveal key connections all the way down to which axioms we are using (see 2.).
- 3. Graham Allan's paper on "Embedding the algebra of formal power series in a Banach algebra" and the resulting notion of an element of finite closed descent (i.e. where the CLOSURE of principal ideals generated by its powers stabilizes).
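As I read Allan's definition (modulo unital/commutative conventions): an element a of a Banach algebra A has finite closed descent when the descending chain of closures of the principal ideals generated by its powers stabilizes:

```latex
% Finite closed descent: the chain of closed ideals stops shrinking.
\overline{aA} \supseteq \overline{a^{2}A} \supseteq \cdots ,
\qquad \exists\, n \in \mathbb{N} : \ \overline{a^{n}A} = \overline{a^{n+1}A}
```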
- 2. Dana Scott's lecture notes on Boolean Valued Forcing in set theory. These are, sadly, unpublished and, I think, Scott's interpretation of the highly cited but also unpublished Solovay-Scott paper on the same topic. For anyone interested in where maths crosses over into philosophy, a must read.
- Taught me that nothing really is "elementary" (Riesz's lemma in this case) in functional analysis or any second-order logic system (here infinite-dimensional complete normed spaces), and that the search for reduction to "atomic" basic notions (here density via Riesz's lemma) is really key.
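For reference, the "elementary" statement in question:

```latex
% Riesz's lemma: Y a proper closed subspace of a normed space X.
\forall\, \alpha \in (0,1) \ \ \exists\, x \in X :
\quad \lVert x \rVert = 1 \ \text{ and } \ \operatorname{dist}(x, Y) \ge \alpha
```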
- Great thread @mariokrenn.bsky.social and apologies for my late answer. Here is my own necessarily very biased list. 1. Tom Ransford's ingeniously simple proof of the Bishop-Stone-Weierstrass theorem. (Math. Proc. Camb. Phil. Soc. 96, pp. 309--311).
- Sounds odd, but this is part of what makes this place special. Like the "mallard" in Trinity's dining hall ...
- Seeing that X (=Twitter) was just acquired by xAI, I wonder why people want to keep their accounts and supply training data for free? One lesson learnt: we should all keep ownership and custody of our own data ...
- Reposted by Michael Kopp: Often LLMs hallucinate because of semantic uncertainty due to missing factual training data. We propose a method to detect such uncertainties using only one generated output sequence. Super efficient method to detect hallucination in LLMs.
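Not the reposted paper's method; for intuition only, here is the simplest single-sequence uncertainty signal in the same setting: the average negative log-likelihood the model assigns to its own greedy output (model choice and prompt are placeholders):

```python
# Single-pass uncertainty proxy: mean negative log-likelihood of generated
# tokens. A baseline illustration, NOT the proposed detection method.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

name = "gpt2"  # placeholder model
tok = AutoTokenizer.from_pretrained(name)
model = AutoModelForCausalLM.from_pretrained(name).eval()

inputs = tok("The capital of Australia is", return_tensors="pt")
out = model.generate(**inputs, max_new_tokens=8, do_sample=False,
                     output_scores=True, return_dict_in_generate=True)

new_tokens = out.sequences[0, inputs.input_ids.shape[1]:]
logps = [torch.log_softmax(step[0], dim=-1)[t].item()
         for step, t in zip(out.scores, new_tokens)]
uncertainty = -sum(logps) / len(logps)  # higher -> less confident
print(tok.decode(new_tokens), uncertainty)
```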
- Amazing work and great read! arxiv.org/abs/2402.14009
- Amazing to see how well xLSTMs perform at modelling biological and chemical sequences. Great read! arxiv.org/abs/2411.041...
- xLSTM can rival decision transformers while scaling linearly at inference time and extrapolating better. This could even be implemented as an embedded system. Very nice work! arxiv.org/abs/2410.22391
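A back-of-envelope account of the inference claim, with hidden size d and generation length T (constants and heads ignored; my rough accounting):

```latex
% Transformer with KV cache: attends over all t previous tokens per step.
\text{transformer: } O(t\,d) \text{ per step} \Rightarrow O(T^{2} d) \text{ total},\quad O(T\,d) \text{ memory}
% Recurrent model (e.g. xLSTM): fixed-size state update, independent of t.
\text{recurrent: } O(d^{2}) \text{ per step} \Rightarrow O(T\,d^{2}) \text{ total},\quad O(d^{2}) \text{ memory}
```

The memory footprint being constant in T is what makes the embedded-system remark plausible.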
- Turns out the extrapolation capability of xLSTMs can help enhance feature extraction for change detection in remote sensing. Essentially combines different memories. Thank you for all the hard work Pedram Ghamisi, Yuduo Wang and Weikang Yu! arxiv.org/abs/2410.10047
- Very nice read and highly recommended. Using contrastive learning and clustering via modern Hopfield Networks the state space for RL (or IRL) can be abstracted and reduced from data alone. x.com/gklambauer/s...
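For intuition on the Hopfield component: a modern Hopfield network retrieves a stored pattern via a softmax over pattern similarities, which is what makes it usable for clustering. A toy retrieval step using the Ramsauer et al. update rule (my illustration, not the cited work's pipeline):

```python
# One modern Hopfield retrieval step: xi_new = softmax(beta * X @ xi) @ X.
import numpy as np

def hopfield_retrieve(X, xi, beta=8.0):
    """X: (num_patterns, dim) stored patterns; xi: (dim,) query state."""
    p = np.exp(beta * (X @ xi))
    p /= p.sum()              # softmax weights over stored patterns
    return p @ X              # attention-weighted mixture of patterns

rng = np.random.default_rng(0)
X = rng.standard_normal((10, 16))
X /= np.linalg.norm(X, axis=1, keepdims=True)  # unit-norm patterns
noisy = X[3] + 0.3 * rng.standard_normal(16)   # corrupted copy of pattern 3
print(np.linalg.norm(hopfield_retrieve(X, noisy) - X[3]))  # small residual
```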
- xLSTMs used to detect autism spectrum disorder (ASD) early, seemingly by isolating relevant spatio-temporal upper-body and head-movement features when toddlers interact with parents. arxiv.org/abs/2408.16924
- xLSTMs seem to do well on audio data. arxiv.org/abs/2408.16568
- Turns out simple xLSTMs do well at stock market predictions. Surprised so much I am not, but good to know. arxiv.org/abs/2408.12408
- Very interesting results. xLSTM works as a vision backbone - especially for large images (as could be expected). arxiv.org/abs/2406.04303
- Fascinating read! Highly recommended. arxiv.org/abs/2405.08766
- ... Or the end of the post-2008 decade of ultra-low interest rates (a regime last seen around the 1930s Great Depression) has made alternative asset classes like high-yield bonds more attractive than tech investments again. Maybe we are just back to normal ... #RealityHurts
- Interesting to see how far LSTM ideas can be pushed to make them competitive with transformers and state space models. Proud of our paper and all my co-authors. arxiv.org/abs/2405.04517
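The heart of the paper's mLSTM cell, from memory and up to stabilization details (see arxiv.org/abs/2405.04517 for the exact form): a matrix memory updated by an outer product, with exponential gating restoring the ability to revise storage decisions:

```latex
% mLSTM sketch: matrix memory C_t, normalizer n_t, query/key/value q_t, k_t, v_t,
% input gate i_t (exponential), forget gate f_t, output gate o_t.
C_t = f_t\, C_{t-1} + i_t\, v_t k_t^{\top}, \qquad
n_t = f_t\, n_{t-1} + i_t\, k_t, \qquad
h_t = o_t \odot \frac{C_t\, q_t}{\max\left( \lvert n_t^{\top} q_t \rvert ,\, 1 \right)}
```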