Machine Learning and Quantum Mechanics
Machine learning and quantum mechanics have nothing in common physically. However, they are built on very similar mathematical building blocks.
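To make that a little more concrete, here is a rough way to see the shared building blocks (my own shorthand, not a formal statement): a quantum state is a vector that a linear, unitary operator acts on, and a neural network layer likewise applies a linear map to a vector, just followed by a nonlinearity:

$$ |\psi'\rangle = U\,|\psi\rangle \qquad \text{vs.} \qquad h = \sigma(W x + b) \,. $$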
There are already plenty of articles about GPT, and ChatGPT in particular; just none by me yet. Here is my personal take on Large Language Models (LLM) and Generative Pretrained Transformers (GPT).
For the 2021 federal election I analyzed the parties' answers to the 38 questions of the Wahl-O-Mat, see that blog post. I have now repeated the whole exercise with the parties' answers for the 2022 state election in North Rhine-Westphalia. This entry is therefore shorter and only shows the results; it no longer explains all the details, which can be found in the older article.
I have been playing around with games and tree search algorithms lately. One algorithm that I wanted to try out is Monte Carlo tree search, MCTS for short. It tries to focus the search on the leaves of the tree that are successful in some sense. In a recent post I started my own tree search library and laid the foundation to try more games. In this post I want to show a bit of Tic-Tac-Toe and MCTS.
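The core of the algorithm fits into a short sketch. The following is a minimal, generic version of the four MCTS phases (selection, expansion, rollout, backpropagation); the game interface with legal_moves, apply, winner and to_move is hypothetical and not the API of my tree search library.

```python
import math
import random

class Node:
    def __init__(self, state, parent=None, move=None):
        self.state = state
        self.parent = parent
        self.move = move
        self.children = []
        self.visits = 0
        self.wins = 0.0

def uct(node, c=1.4):
    # Upper confidence bound for trees: exploitation plus exploration bonus.
    return node.wins / node.visits + c * math.sqrt(
        math.log(node.parent.visits) / node.visits
    )

def mcts(root_state, game, iterations=1000):
    root = Node(root_state)
    for _ in range(iterations):
        # 1. Selection: descend along the child with the highest UCT value
        #    as long as every child has been visited at least once.
        node = root
        while node.children and all(c.visits > 0 for c in node.children):
            node = max(node.children, key=uct)
        # 2. Expansion: create children for all legal moves, then pick an
        #    unvisited one (if the position is not terminal).
        if not node.children:
            node.children = [
                Node(game.apply(node.state, m), parent=node, move=m)
                for m in game.legal_moves(node.state)
            ]
        if node.children:
            unvisited = [c for c in node.children if c.visits == 0]
            node = random.choice(unvisited or node.children)
        # 3. Rollout: play random moves until the game is decided.
        state = node.state
        while game.winner(state) is None and game.legal_moves(state):
            state = game.apply(state, random.choice(game.legal_moves(state)))
        winner = game.winner(state)  # winning player, or None for a draw
        # 4. Backpropagation: credit every node whose incoming move was made
        #    by the winning player; count draws as half a win.
        while node is not None:
            node.visits += 1
            if winner is None:
                node.wins += 0.5
            elif node.parent is not None and game.to_move(node.parent.state) == winner:
                node.wins += 1.0
            node = node.parent
    # Recommend the most visited move at the root.
    return max(root.children, key=lambda c: c.visits).move
```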
When I once tried out the Wahl-o-Mat and afterwards looked at the statements of the individual parties, some parties seemed rather redundant. So I asked myself whether the questions really allow one to differentiate all the parties, or whether some of them had given almost the same answers. So I took the parties' positions from the Wahl-o-Mat, coded them as $-1$, $0$ and $+1$ and stored them as a table.
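Just to illustrate the encoding step, here is a little pandas sketch with made-up party names and answers; the real table comes from the Wahl-o-Mat positions.

```python
import pandas as pd

# Map the textual answers to -1, 0, +1.
coding = {"disagree": -1, "neutral": 0, "agree": +1}

# Made-up example data; the real data are the Wahl-o-Mat positions.
answers = pd.DataFrame(
    {
        "Party A": ["agree", "neutral", "disagree"],
        "Party B": ["agree", "agree", "disagree"],
        "Party C": ["disagree", "neutral", "agree"],
    },
    index=["Question 1", "Question 2", "Question 3"],
)

positions = answers.apply(lambda column: column.map(coding))

# Fraction of questions on which two parties gave exactly the same answer.
agreement = pd.DataFrame(
    {
        a: {b: (positions[a] == positions[b]).mean() for b in positions.columns}
        for a in positions.columns
    }
)
print(agreement)
```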
Using an audio recording of the cars passing by my window, I try to count how many cars have passed. I introduce the waveform and the spectrum, construct a filter bank and find peaks. In the second part I play around with dimensionality reduction algorithms.
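As a rough sketch of the counting idea (not the exact processing from the post; the file name and the frequency band are placeholders), one can band-pass the recording, smooth the envelope and count the prominent peaks:

```python
import numpy as np
from scipy import signal
from scipy.io import wavfile

# Placeholder file name; the real recording comes from my window.
rate, samples = wavfile.read("cars.wav")
if samples.ndim > 1:
    samples = samples.mean(axis=1)  # mix stereo down to mono

# Band-pass the recording around frequencies where passing cars are loud
# (the band is an assumption).
sos = signal.butter(4, [200, 2000], btype="bandpass", fs=rate, output="sos")
filtered = signal.sosfilt(sos, samples)

# Smooth the envelope with a one-second moving average.
envelope = np.abs(signal.hilbert(filtered))
smoothed = np.convolve(envelope, np.ones(rate) / rate, mode="same")

# Count sufficiently prominent peaks, at most one every two seconds.
peaks, _ = signal.find_peaks(smoothed, distance=2 * rate, prominence=np.std(smoothed))
print(f"Estimated number of cars: {len(peaks)}")
```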
I have tried out reinforcement learning with the Frozen Lake example. In this post I will introduce the concept of Q-learning, the TensorFlow Agents library, the Frozen Lake game environment and how I put it all together.
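Leaving TensorFlow Agents aside for a moment, the Q-learning update itself is small enough to sketch with a plain table and the gymnasium version of Frozen Lake; this is just an illustration of the update rule, not the setup from the post.

```python
import numpy as np
import gymnasium as gym

# Tabular Q-learning: Q(s,a) ← Q(s,a) + α [r + γ max_a' Q(s',a') − Q(s,a)].
env = gym.make("FrozenLake-v1", is_slippery=True)
q_table = np.zeros((env.observation_space.n, env.action_space.n))
alpha, gamma, epsilon = 0.1, 0.99, 0.1  # learning rate, discount, exploration

for episode in range(5000):
    state, _ = env.reset()
    done = False
    while not done:
        # ε-greedy action selection.
        if np.random.random() < epsilon:
            action = env.action_space.sample()
        else:
            action = int(np.argmax(q_table[state]))
        next_state, reward, terminated, truncated, _ = env.step(action)
        done = terminated or truncated
        # Update toward the bootstrapped target; no bootstrap on terminal states.
        target = reward + gamma * np.max(q_table[next_state]) * (not terminated)
        q_table[state, action] += alpha * (target - q_table[state, action])
        state = next_state

print(q_table)
```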
My university background is physics, which is an empirical science. All measurements or derived quantities must be quoted with an error estimate. This should ideally include both a statistical and a systematic error. If you take a look at the paper from my thesis, you will find this table:
I record a bunch of my activities with Strava. Some are novel routes that I try out and have only done once; the others are routes that I do more than once. The thing that I am missing on Strava is a comparison of similar routes. It has segments, but I would have to make my whole commute a single segment in order to see how I fare on it.
As part of IQ tests there are these horrible number sequence tests. I hate them with a passion because they are mathematically ill-defined problems. A super simple one would be to take 1, 3, 5, 7, 9 and ask for the next number. One could find this very easy, say that this sequence consists of the odd numbers and conclude that the next number should be 11. But searching the On-Line Encyclopedia of Integer Sequences (OEIS) for that exact sequence gives 521 different results! Here are the first ten of them: