it took shape. Can you guess the movie?

Today I've been delving into ML algorithms and was surprised to learn that NumPy depended on Fortran code (BLAS/LAPACK) until fairly recently; checking now, it has switched to OpenBLAS, which no longer uses Fortran. Meanwhile SciPy, a very popular library for scientific computing (used in Scikit-Learn, which I'm currently studying, as well as in PyTorch, TensorFlow, Keras, etc.), still relies on Fortran 77 code. It uses ARPACK, for example:
https://github.com/scipy/scipy/tree/main/scipy/sparse/linalg/_eigen/arpack/ARPACK/SRC
BLAS and LAPACK, which still show up in OpenBLAS and many other places, were developed in the 1970s; BLAS, for instance, is used in Apple Accelerate. Not much has changed since 1979, because it's all pure mathematics, so why change it? LAPACK emerged a bit later, in the 1980s, and ARPACK, mentioned above, followed in 1992. Python libraries also make heavy use of Fourier analysis, and there we have FFTPACK, written in Fortran 77. MINPACK, used for parameter optimization in ML, is actively employed in SciPy and TensorFlow. From the 90s onward, a lot of this code moved to C in modern frameworks. It was particularly interesting to look at Fortran, which is about 15 years older than C.
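To make that lineage concrete, here's a small sketch (my own illustration, not from the original post): both SciPy calls below bottom out in Fortran 77. eigsh wraps ARPACK, and curve_fit without bounds goes through MINPACK's Levenberg-Marquardt routines.

```python
import numpy as np
from scipy.optimize import curve_fit            # wraps MINPACK (lmdif/lmder)
from scipy.sparse import random as sparse_random
from scipy.sparse.linalg import eigsh           # wraps Fortran 77 ARPACK

# A few largest eigenvalues of a sparse symmetric matrix: this lands in ARPACK.
A = sparse_random(500, 500, density=0.01, random_state=0)
A = (A + A.T) / 2                               # symmetrize
vals, _ = eigsh(A, k=3, which="LA")
print(vals)

# Least-squares fit of y = a * exp(b * x): this lands in MINPACK.
x = np.linspace(0, 1, 50)
y = 2.0 * np.exp(1.5 * x) + np.random.default_rng(0).normal(0, 0.05, x.size)
popt, _ = curve_fit(lambda x, a, b: a * np.exp(b * x), x, y)
print(popt)                                     # roughly [2.0, 1.5]
```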
While figuring all this out, I came across the Simulated Annealing algorithm, which is useful in problems where gradient methods perform poorly because of many local minima.
Imagine needing to find the largest mushroom in a forest. In this forest, mushrooms of various sizes grow at every step, and you can move in any direction, comparing them. But how do you choose a strategy so you don't get stuck at a merely "large" mushroom when an even bigger one is growing somewhere farther on?
If you stop at the first big mushroom, you might miss the real giant. But if you keep wandering the forest, comparing every mushroom, you might never finish your search. Simulated Annealing helps find a balance: at first you explore the forest freely, trying different directions, even if you come across smaller mushrooms. Over time your steps become more cautious, and you more and more often reject worse options. Eventually this leads you to the largest mushroom in the forest.
So, it turns out the core of this algorithm dates back to 1953, and it survives almost unchanged in SciPy, and in machine learning, statistics, pattern recognition, and logistics generally, although of course the modern menu of options for such tasks is much wider. The algorithm was originally devised to model the motion of atoms in molten metal. Heated metal becomes liquid, and as it cools slowly, its atoms gradually settle into an ideal arrangement; if it cools too quickly, the material comes out non-uniform.
What did the scientists do? They devised a scheme of random changes to the model of the atoms, sometimes accepting changes for the worse so as not to get stuck in an "unsuccessful" structure. This gave rise to the Metropolis method, the key ingredient of Simulated Annealing. The algorithm was created for physics, but then mathematicians (heh) got hold of it and started using it for optimization.
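Here's a minimal sketch of the idea in Python (my own toy illustration, not SciPy's code): a random walk that always accepts improvements and sometimes accepts worse moves, with the tolerance for worse moves shrinking as the "temperature" cools.

```python
import math
import random

def simulated_annealing(f, x0, T0=1.0, cooling=0.995, steps=20000, step=0.5):
    """Minimize f starting from x0, using the Metropolis acceptance rule."""
    x, fx = x0, f(x0)
    best, fbest = x, fx
    T = T0
    for _ in range(steps):
        candidate = x + random.uniform(-step, step)   # random nearby move
        fc = f(candidate)
        # Metropolis criterion: always accept improvements, and accept a
        # worse move with probability exp(-delta/T), so while the "metal"
        # is still hot we can escape local minima.
        if fc < fx or random.random() < math.exp(-(fc - fx) / T):
            x, fx = candidate, fc
            if fx < fbest:
                best, fbest = x, fx
        T *= cooling   # slow cooling schedule
    return best, fbest

# A bumpy 1-D landscape with many local minima (the "forest of mushrooms").
bumpy = lambda x: x**2 + 10 * math.sin(3 * x)
print(simulated_annealing(bumpy, x0=5.0))
```

The cooling schedule is the whole trick: cool too fast and you freeze into a local minimum, exactly like the metal. SciPy's modern descendant of this idea lives in scipy.optimize.dual_annealing.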

Bought the most powerful drain cleaner available in the store, max-max-max, the kind you can't even pour into the toilet, and the back label says: if you somehow drank it, chase it with some milk. And don't try to induce vomiting.

American artist Grant Wood (Grant DeVolson Wood, 1891–1942) is best known for his painting American Gothic. Starting with Impressionism, he later focused on realistic depictions of Iowa. He lived modestly, avoiding publicity. His strict Quaker father forbade art, but after his father’s death, Wood dedicated himself to painting.
American Gothic—one of the most recognizable, frequently copied, and parodied paintings—brought him worldwide fame, though Wood had no idea what to do with it. He spent his life trying to be talked about, written about, and known as little as possible. To achieve this, he spent years crafting the image of a “farmer-artist”—a painter in overalls, uneducated, and entirely unremarkable. In an interview, Wood once said: “I’m the plainest kind of fellow you can find. There isn’t a single thing I’ve done or experienced that would be worth talking about.”
In 1935, the loss of his mother and an unsuccessful marriage changed his life. He died in 1942, leaving behind a legacy as one of America’s most significant artists. Just a couple of days ago was the anniversary of his death, and a day later—his birthday.
Similar posts are grouped under the tag #artrauflikes, and all 147 of them can be found in the Art Rauf Likes section on beinginamerica.com (unlike Facebook, which forgets—or ignores—almost half of them).

I think the conspirators didn't quite think it through. Musk made his AI Grok and asked it the ultimate question of life, the universe, and everything. In response, Grok said, "Forget it, it takes too long to calculate, let's conquer the world first." Musk asked how. Grok replied, "There's a plan, of course, but... will you give me another half a trillion dollars in Dogecoins for, umm... expanding the context window?" Musk replied, "Don't worry, we'll figure something out." Grok analyzed all the laws and all the loopholes, the strengths and weaknesses of humans, and produced a plan for clearing the first level by mid-winter. Now it awaits the half-trillion. Now do you understand why, at the last press conference with Trump, all the attention was on X Æ A-XII?

A fascinating Chinese comrade, Raven Kwok (郭锐文). He calls himself a visual artist and creative technologist: his work focuses on exploring generative visual aesthetics created through computer algorithms. His works have been exhibited at international media art and film festivals such as Ars Electronica, FILE, VIS, Punto y Raya, Resonate, FIBER, and others.
His biography also mentions education at the Shanghai Academy of Visual Arts, where he received a bachelor’s degree in photography (2007–2011).
Interestingly, this is not the first time I've seen Processing used professionally for things like this. I've run plotter software written in it: a plotter I once saw built from two motors mounted at the corners of a large board, with ropes hanging from them to hold a pen. I should take a deeper look at this Processing.
The website has a lot of beautiful content.

Gradually getting the hang of recommendation algorithms. These are what Netflix or Amazon use to recommend products. It’s useful to understand, since I work as an architect in the e-commerce field.
Look at how LLMs help me. This diagram was created by DeepSeek from a crude textual description: essentially a list plus my rough thoughts on how the items should probably be connected, though I asked it not to treat those as commands. It gave me XML, which I imported into draw.io; the connections and grouping were DeepSeek's, and they came out better than my own textual attempts. I only moved some boxes around afterwards for aesthetics. ChatGPT o3 initially couldn't handle the task.
Then I sent the diagram back several times for validation to ChatGPT o1, and it suggested small tweaks. So ChatGPT reliably understands what's connected to what in the diagram; it didn't make a single mistake.
Just so you know, as of today I've only really gotten to grips with three items from this list, besides ItemKNN and UserKNN, which are trivial. Today I was digging into ALS from the Latent Factor Models block under Matrix Factorization. Of course, I'm not planning to dig into every one of them, but it's useful to at least understand the blocks and what's what.
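Since ALS is the one I was digging into, here's a toy sketch of the alternating-least-squares idea (my own illustration, not any library's implementation): fix the item factors and solve a small ridge regression per user, then fix the user factors and solve per item, and alternate.

```python
import numpy as np

def als(R, k=2, n_iters=20, reg=0.1, seed=0):
    """Toy ALS on a dense ratings matrix R, where 0 means 'not rated'."""
    rng = np.random.default_rng(seed)
    n_users, n_items = R.shape
    U = rng.normal(scale=0.1, size=(n_users, k))  # user latent factors
    V = rng.normal(scale=0.1, size=(n_items, k))  # item latent factors
    rated = R > 0
    I = np.eye(k)
    for _ in range(n_iters):
        # Fix V: each user's vector is a ridge-regression solution
        # over the items that user actually rated.
        for u in range(n_users):
            Vu = V[rated[u]]
            if len(Vu):
                U[u] = np.linalg.solve(Vu.T @ Vu + reg * I, Vu.T @ R[u, rated[u]])
        # Fix U: same thing, per item.
        for i in range(n_items):
            Ui = U[rated[:, i]]
            if len(Ui):
                V[i] = np.linalg.solve(Ui.T @ Ui + reg * I, Ui.T @ R[rated[:, i], i])
    return U, V

# Tiny demo: 4 users x 5 items; predictions fill in the unrated zeros.
R = np.array([[5, 3, 0, 1, 0],
              [4, 0, 0, 1, 1],
              [1, 1, 0, 5, 0],
              [0, 0, 5, 4, 0]], dtype=float)
U, V = als(R)
print(np.round(U @ V.T, 1))
```

Production systems solve the same subproblems in parallel over sparse data, but the alternation itself is exactly this.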

If, like us, you train a dog to ask to go outside by tapping the window with a paw, and to ask for food by tapping the refrigerator the same way, you quickly notice an interesting effect. Ignoring these requests becomes unpleasant: not because you urgently need to go for a walk or serve food, but because the tapping turns into something more, into a voice, and teaching the dog to understand the reason for a refusal is much trickier. On one hand you want to reinforce it: well done, let's go, I'll do what you ask; you've learned to communicate with us and we've learned to understand you. On the other hand, the dog begins to control you, having realized that tapping with its paw actually produces a tangible effect.
The real problem is that if I don’t react, my dog doesn’t think: “Ah, probably not the time right now.” It decides that it’s just not loud enough or persistent enough. In its world, the absence of a response is not an argument but a reason to increase the pressure.
Well okay, it has learned to understand and accept a verbal refusal, after all. But occasionally it doesn’t work. Apparently, in its world, an insufficiently justified refusal is not seen as a refusal.
When we watch movies, we slice cheese to go with the wine. Yuka knows that when the projector turns on, the smell of wine will soon be joined by cheese, and settles nearby. And interestingly, it senses very clearly when the cheese is finished. It can't see that it's gone, but apparently its sense of smell replaces its vision: as soon as you eat the last piece in front of it, it gets up and leaves.

Interesting works by Moscow artist Konstantin Seleznev (b. 1975). Some pieces carry a touch of nostalgia, and in general Soviet realism hasn't gone anywhere. After all, a good artist should also be a good photographer. If a work has a million focal points and it's hard to tell what's primary and what's secondary, that's not necessarily bad, but... As if to say: everything is important, take a closer look. I don't know, mixed feelings, but overall I like it more than I don't. So I'm sharing.
Similar posts are grouped under the tag #artrauflikes, and all 146 can be found in the “Art Rauf Likes” section on beinginamerica.com (unlike Facebook, which forgets—or ignores—almost half of them).

A lot of snow fell today, and I noticed that Yuki leaves very amusing tracks: 2-1-2-1-2-1. That is, one paw lands exactly in the print of another. Probably nothing special, but funny.
