Posts about Science

I am a physicist, so naturally I have things to share in this area as well. Here you can find articles about physics, but also about mathematics and statistics. Occasionally I also look at financial matters, and those end up in this category too.


CO₂ Footprint of my PhD Thesis

As part of my Master and PhD theses I have used a lot of computer time on the supercomputers in Jülich, Stuttgart and Bologna, as well as on the cluster in Bonn. I want to estimate the magnitude of CO₂ that this has released.

It is a bit hard to say exactly how many core hours I have used, as I also worked with data that already existed. Let's pick 5 Mh as a round number. On JUWELS, with its dual Intel Xeon Platinum 8168 nodes with 48 cores each, that corresponds to around 100 kh of node time. Each of the CPUs has a TDP of 205 W. Then there is network, file system and backup; perhaps 750 W per node? And then there is cooling, which takes roughly the same again, so 1.5 kW per node. That makes 150 MWh of electricity used. In Germany it seems we have to assume 0.4 kg/kWh of CO₂. This gives a little over 60 t of CO₂.
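This back-of-the-envelope arithmetic can be written out in a few lines of Python; every input below is one of the rough assumptions from the text, not a measured value:

```python
# Rough CO2 estimate; all inputs are assumptions from the text.
core_hours = 5e6          # assumed total core hours
cores_per_node = 48       # dual Intel Xeon Platinum 8168, 24 cores each
node_hours = core_hours / cores_per_node       # ~104 kh of node time

power_per_node_kw = 1.5   # CPUs, network, storage plus cooling, assumed
energy_mwh = node_hours * power_per_node_kw / 1000
co2_t = energy_mwh * 1000 * 0.4 / 1000         # 0.4 kg CO2 per kWh

print(f"{energy_mwh:.0f} MWh, {co2_t:.0f} t CO2")
```

Carrying the exact numbers through gives slightly more than the rounded figures in the text (about 156 MWh and 62 t), but the order of magnitude is the same.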

Read more…

Clustering Recorded Routes

I record a bunch of my activities with Strava. Some are novel routes that I try out and only do once; the others are routes that I do more than once. The thing I am missing on Strava is a comparison of similar routes. It has segments, but I would have to turn my whole commute into one segment in order to see how I fare on it.

So what I would like to try here is to use a clustering algorithm to automatically identify clusters of similar rides. I would also like to find rides that have the same start and end point but different routes in between. In my machine learning book I read that there are clustering algorithms, so this is the project that I would like to apply them to.

Incidentally, Strava features a lot of apps, so I had a look but could not find what I was looking for. Instead I want to program this myself in Python. One can export the data from Strava and obtain a ZIP file with all the GPX files corresponding to one's activities.
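As a minimal sketch of the second goal — grouping rides that share start and end points — here is a naive greedy grouping over hypothetical coordinates (the ride names and points are made up; the real data would come from the exported GPX files):

```python
from math import radians, cos, hypot

# Hypothetical (lat, lon) start/end points of four rides.
rides = {
    "commute_1": ((50.73, 7.10), (50.78, 7.18)),
    "commute_2": ((50.73, 7.10), (50.78, 7.18)),
    "tour":      ((50.73, 7.10), (50.90, 7.30)),
    "loop":      ((50.95, 6.95), (50.95, 6.95)),
}

def dist_km(a, b):
    """Approximate flat-earth distance between two (lat, lon) points in km."""
    dlat = (a[0] - b[0]) * 111.0
    dlon = (a[1] - b[1]) * 111.0 * cos(radians(a[0]))
    return hypot(dlat, dlon)

def same_endpoints(r1, r2, tol_km=1.0):
    """Two rides match if both start and end lie within tol_km of each other."""
    (s1, e1), (s2, e2) = rides[r1], rides[r2]
    return dist_km(s1, s2) < tol_km and dist_km(e1, e2) < tol_km

# Greedy clustering: put each ride into the first matching cluster.
clusters = []
for name in rides:
    for cluster in clusters:
        if same_endpoints(name, cluster[0]):
            cluster.append(name)
            break
    else:
        clusters.append([name])

print(clusters)  # the two commutes end up together
```

A proper solution would compare whole tracks (e.g. with a trajectory distance) rather than just endpoints, but this shows the basic shape of the problem.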

Read more…

Are Clothespins Worth Using?

I've been using clothespins all along. I know other people who do as well, and some who never use them. While discussing this over dinner, it emerged that there are two stances people take:

  1. Pins are not worth using at all. The clothing dries as fast as it does without them, perhaps insignificantly slower. The time needed to work with the pins does not make up for the benefit of having the laundry done faster.

  2. Pins clearly must make a difference, as the clothing hangs in just two layers instead of four.

Well, I am clearly in the second camp. But this is a hypothesis that one can test and potentially refute. So let's apply the scientific method! As a setup I took four pieces of underwear and two t-shirts. Then I put half of them on the drying rack with pins, the other half just folded in half. Every now and then I measured their weight with a kitchen scale.
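If drying is roughly exponential, the weight measurements can be turned into a drying rate by a log-linear fit. A small sketch with entirely made-up measurements (time in hours, excess water weight in grams):

```python
import numpy as np

# Hypothetical measurements: total weight minus dry weight, in grams.
t = np.array([0.0, 1.0, 2.0, 3.0, 4.0])
water = np.array([200.0, 120.0, 75.0, 45.0, 28.0])

# Model water ~ w0 * exp(-k t); a straight-line fit to log(water)
# yields the drying rate k as the (negated) slope.
k, log_w0 = np.polyfit(t, np.log(water), 1)
print(f"drying rate: {-k:.2f} per hour, half-life: {np.log(2) / -k:.1f} h")
```

Comparing the fitted rates of the pinned and the folded pieces would then quantify how much of a difference the pins actually make.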

Read more…

VAT Cut and a Changed Base Value

Because of the VAT cut, ALDI is currently offering 3 % off everything. Mediamarkt has occasionally run similar promotions with a 19 % discount under the slogan »Mediamarkt schenkt die Mehrwertsteuer« (“Mediamarkt gives you the VAT for free”). What is actually interesting is that these discounts lower the prices even further than necessary.

Let the net price be $N$; then, with a VAT rate $m$, the gross price $B$ is given by $B = N \cdot (1 + m)$. Normally $m = 0.19$, so we have $B = 1.19 \cdot N$. If one wants to waive the VAT, one has to grant a discount factor of $1/1.19 \approx 0.8403361$. That is a discount of $1 - 0.8403361 \approx 0.1596639$, i.e. just under 16 %. But if Mediamarkt gave its customers only 16 % off, many would probably be outraged. So there is an extra 3 % discount for everyone who is not so fluent in percentage arithmetic.
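The arithmetic is quickly checked in Python; the example gross price of 119 (for a net price of 100) is just for illustration:

```python
# Waiving the VAT: with gross = net * (1 + vat), removing the VAT
# means multiplying the gross price by 1 / (1 + vat).
vat = 0.19
factor = 1 / (1 + vat)        # ~0.8403
discount = 1 - factor         # ~0.1597, i.e. just under 16 %
print(f"discount: {discount:.2%}")

# A flat 19 % off the gross price cuts deeper than necessary:
gross = 119.0                 # example gross price for net = 100
print(gross * (1 - 0.19))     # below the net price of 100
print(gross * factor)         # exactly the net price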

Read more…

Number Sequence Questions Tried with Deep Learning

As part of IQ tests there are these horrible number sequence questions. I hate them with a passion because they are mathematically ill-defined problems. A super simple one would be to take 1, 3, 5, 7, 9 and ask for the next number. One could find this very easy and say that this sequence is the odd numbers, and therefore the next number should be 11. But searching the On-Line Encyclopedia of Integer Sequences (OEIS) for that exact sequence gives 521 different results! Here are the first ten of them:

| Sequence | Prediction |
|----------|------------|
| The odd numbers: $a(n) = 2n + 1$. | 11 |
| Binary palindromes: numbers whose binary expansion is palindromic. | 15 |
| Josephus problem: $a(2n) = 2a(n) - 1$, $a(2n+1) = 2a(n) + 1$. | 11 |
| Numerators in canonical bijection from positive integers to positive rationals ≤ 1. | 11 |
| $a(n)$ = largest base-2 palindrome $m \le 2n+1$ such that every base-2 digit of $m$ is ≤ the corresponding digit of $2n+1$; $m$ is written in base 10. | 9 |
| Fractalization of $(1 + \lfloor n/2 \rfloor)$. | 8 or larger |
| Self numbers or Colombian numbers (numbers that are not of the form $m$ + sum of digits of $m$ for any $m$). | 20 |
| Numbers that are palindromic in bases 2 and 10. | 33 |
| Numbers that contain odd digits only. | 11 |
| Number of $n$-th generation triangles in the tiling of the hyperbolic plane by triangles with angles… | 12 |

So there must be an additional hidden constraint in the problem statement. Somehow they want the person to find the simplest sequence that explains the given terms and then use that to predict the next number. But nobody ever defined what “simple” means in this context. If one had a formal definition of the allowed sequence patterns, then these problems would be solvable. As they stand, I deem them utterly pointless.

Since I am exploring machine learning with Keras, I wondered whether one could solve this class of problems with these techniques. First I would have to acquire a bunch of these sequence patterns, then generate a bunch of training data and eventually try to train different networks on it. Finally I'd evaluate how well they perform.
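The training-data generation step could look like the following sketch. The two pattern families (arithmetic and geometric) are my own arbitrary choice — which is, of course, exactly the hidden assumption the post complains about:

```python
import numpy as np

rng = np.random.default_rng(0)

# A tiny, hand-picked family of "allowed" sequence patterns.
def arithmetic(a, d, n=6):
    return [a + d * i for i in range(n)]

def geometric(a, r, n=6):
    return [a * r**i for i in range(n)]

def make_dataset(size=1000):
    """Generate (first five terms, sixth term) training pairs."""
    xs, ys = [], []
    for _ in range(size):
        if rng.random() < 0.5:
            seq = arithmetic(rng.integers(1, 10), rng.integers(1, 5))
        else:
            seq = geometric(rng.integers(1, 5), rng.integers(2, 4))
        xs.append(seq[:5])
        ys.append(seq[5])
    return np.array(xs, dtype=float), np.array(ys, dtype=float)

X, y = make_dataset()
print(X.shape, y.shape)  # (1000, 5) (1000,)
```

These pairs could then be fed into a small Keras regression model that maps the five given terms to the next one.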

Read more…

Default Standard Deviation Estimators in Python NumPy and R

I recently noticed by accident that the default standard deviation implementations in R and NumPy (Python) do not give the same results. In R we have this:

> x <- 1:10
> x
 [1]  1  2  3  4  5  6  7  8  9 10
> sd(x)
[1] 3.02765

And in Python the following:

>>> import numpy as np
>>> x = np.arange(1, 11)
>>> x
array([ 1,  2,  3,  4,  5,  6,  7,  8,  9, 10])
>>> np.std(x)
2.8722813232690143

So why does one get 3.02 and the other 2.87? The difference is that R uses the unbiased estimator (dividing by $n - 1$), whereas NumPy by default uses the biased estimator (dividing by $n$). See this Wikipedia article for the details.
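NumPy can reproduce R's result: the `ddof` ("delta degrees of freedom") parameter of `np.std` controls the divisor $n - \text{ddof}$, so `ddof=1` gives the unbiased estimator:

```python
import numpy as np

x = np.arange(1, 11)

# NumPy's default is the biased estimator (divide by n) ...
print(np.std(x))          # 2.8722813232690143

# ... but ddof=1 switches to the unbiased estimator (divide by n - 1),
# which matches R's sd():
print(np.std(x, ddof=1))  # 3.0276503540974917
```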

Read more…

Card Trick Explained with Combinatorics

Don't ask me why my mind works that way, but for some reason I recalled a card trick that a neighbor kid showed me when I was little. At the time I found it impressive that such things can even work. And today I could not really recall how the trick works from the performer's side, only how it appears to the audience.

The general idea is this: You have regular playing cards and take a selection of 20 unique ones. They get paired up and shown only to the audience. Each audience member picks one such pair and remembers it without telling the performer. Then the performer blindly stacks all those pairs and lays out the cards in a seemingly weird pattern with four rows and five columns. Each audience member indicates the row (or rows) that their pair is located in. The performer then tells them which cards they have picked.

As the performer has not necessarily seen the pairs beforehand, he does not know which card belongs to which other card. Knowing only the row or rows seems too little information. But then the solution just hit me while I continued to look at the trees outside: There are 10 pairs. And there are 4 possibilities to choose a single row and $\binom{4}{2} = 6$ possibilities to choose two different rows. So one only needs to make sure that each of these row combinations occurs for exactly one pair.
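The counting argument is easily verified: `combinations_with_replacement` enumerates exactly the possible row signatures of a pair — either the same row twice or two different rows:

```python
from itertools import combinations_with_replacement

# Row signatures for a pair in a 4-row layout: both cards in the same
# row (4 ways) or in two different rows (C(4, 2) = 6 ways).
signatures = list(combinations_with_replacement(range(4), 2))
print(len(signatures))  # 10 -- exactly one signature per pair
print(signatures)
```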

So let us go through it from the performer's perspective. The actual printing on the cards does not matter for us; we just need to know that the cards are paired up. I indicate this with the same fill color.

Read more…

Fit Range Determination with Machine Learning

One of the most tedious and error-prone things in my work in Lattice QCD is the manual choice of fit ranges. While reading up on Keras, deep neural networks and machine learning and how experimental the whole field is, I thought about just trying the fit range selection with deep learning.

We have correlation functions $C(t)$ which behave as $\sum_n A_n \exp(-E_n t)$ plus noise. The $E_n$ are the energies of the states $n$, and the $A_n$ are the respective amplitudes. We are interested in extracting the smallest of the $E_n$, the ground-state energy. We use the fact that for sufficiently large times $t$ the term with the smallest energy dominates the expression. Without loss of generality we say $E_0 < E_1 < \ldots$ and formally write $$ C(t) \to A_0 \exp(-E_0 t) \quad \text{for } t \to \infty \,. $$

By taking the effective mass as defined by $$ m_\text{eff}(t) = \log\left(\frac{C(t)}{C(t+1)}\right) $$ we get $m_\text{eff}(t) \sim E_0$ in the region of large $t$. There are more subtleties involved (back-propagation, thermal states), which we will ignore here. The effective mass is expected to be constant in some region of the data where $t$ is sufficiently large such that the higher states have decayed, yet the exponentially decaying signal-to-noise ratio is still sufficiently good. An example of such an effective mass is the following.
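A small illustration with a synthetic, noise-free correlator (the energies $E_0 = 0.5$ and $E_1 = 1.2$ and the amplitudes are made-up values) shows the plateau at the ground-state energy:

```python
import numpy as np

# Synthetic two-state correlator: C(t) = A0 exp(-E0 t) + A1 exp(-E1 t)
# with made-up values E0 = 0.5, E1 = 1.2, A0 = 1.0, A1 = 0.3.
t = np.arange(0, 15)
C = 1.0 * np.exp(-0.5 * t) + 0.3 * np.exp(-1.2 * t)

# Effective mass m_eff(t) = log(C(t) / C(t+1)); the excited state
# decays away and m_eff approaches E0 = 0.5 for large t.
m_eff = np.log(C[:-1] / C[1:])
print(m_eff[0], m_eff[-1])  # starts above 0.5, plateaus near 0.5
```

With real, noisy data the plateau ends where the noise takes over, and picking its boundaries is exactly the fit-range problem described above.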

Read more…

Simple Captcha with Deep Neural Network

The other day I had to fill in a captcha on some website. Most sites today use Google's reCAPTCHA. It shows little image tiles and asks you to classify them. Google uses this to train a neural network to classify situations for autonomous driving. Writing a program to solve this captcha would require obscene amounts of data to train a neural network. And if that already existed, autonomous cars would be here already.

The captcha on that website, however, was of the old and simple kind:

It is just six numbers (and always six numbers), the concentric circles and some pepper noise. These kinds of captchas are outdated because one can solve them with machine learning. And as I am currently working through “Deep Learning with Python” by François Chollet and was looking for a practice project, this captcha came as inspiration at just the right moment.
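Pepper noise in particular is easy to remove before any network gets involved. A toy sketch on a synthetic stand-in image (not the actual captcha data) using a plain 3×3 median filter in NumPy:

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy stand-in for the captcha: white background with pepper noise
# (random black pixels), purely synthetic for illustration.
img = np.full((20, 20), 255.0)
noise = rng.random(img.shape) < 0.05
img[noise] = 0.0

# A 3x3 median filter removes isolated black pixels -- a typical
# preprocessing step before feeding the image into a network.
padded = np.pad(img, 1, mode="edge")
windows = np.lib.stride_tricks.sliding_window_view(padded, (3, 3))
cleaned = np.median(windows, axis=(2, 3))

print(int(noise.sum()), int((cleaned == 0).sum()))
```

The concentric circles are thin lines and would largely disappear the same way, leaving mostly the six digits for the classifier.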

Read more…

Physics in Star Trek: Enterprise

I've always enjoyed the science fiction genre, and there are many books and shows available. I especially like works where the physics is credible. The Enceladus series by Brandon Q. Morris is such a work. The Expanse show also seems pretty great in that regard.

Recently I have watched Star Trek: Enterprise and loved the plots, the characters and their development, the recurring arch enemies and the generally uplifting spirit. But on the physics side I had to chuckle quite often. Some people just take the science to be fictitious and don't bother; I, however, prefer credible science fiction and complain a lot.

First off: Why does something always explode on the bridge when they get hit? On a navy warship the bridge is exposed, so there it could indeed happen. But warships also have a CIC, a bunker deep inside the ship. The Enterprise does not have a window on the bridge, so why is it located at the edge of the hull? In The Expanse, the MCRN Donnager seems to have a combined bridge and CIC well protected inside the ship. During the fight nothing explodes in the CIC. And even the railgun hit is unspectacular, as it should be.

Read more…