Surprising Facts About Nature and Science | February 10 2025, 22:11

Live and learn, as they say.

Strawberries, both garden and wild, are not berries but nuts. More precisely, the actual fruits are the "seeds" on the surface (dry, nut-like achenes), while the flesh is the swollen receptacle. Potato fruits are two-chambered berries. A pear is, botanically, the same kind of fruit as an apple (a pome). Cherries, plums, apricots, and peaches are all drupes; drupes come in one-seeded forms (cherry, plum, peach, coconut) and many-seeded, aggregate forms (raspberry, blackberry, cloudberry). Bananas are berries. Pineapple is an herbaceous plant, not a tree. Watermelon is a berry (a pepo, like pumpkins). Almonds are not nuts but the seeds of a drupe, a dry stone fruit. Apple seeds and the pits of cherries, apricots, peaches, and plums contain cyanide precursors (the amygdalin in them breaks down into cyanide), just as in almonds. Chocolate contains theobromine: a couple of bars can be lethal or nearly so for a dog, and half a bar will definitely knock it down. Vanilla comes from a Mexican orchid vine, while vanillin, the artificial vanilla substitute, is a byproduct of the pulp and paper industry.

There is no such animal as a panther. In popular usage, "panthers" are black jaguars or leopards. Black panthers have spots too; they are just hard to see. Polar bears have black skin and transparent fur, and they look white for the same reason clouds do. Woodpeckers have tongues up to four times the length of their beaks; at rest the tongue wraps around the skull and can be stretched far out. The tongue of the European green woodpecker goes down into the throat, stretches across the back of the neck, around the back of the skull under the skin, across the crown between the eyes, and usually ends right under the eye socket. In some woodpeckers, the tongue exits the skull between the eyes and enters the beak through one of the nostrils.

Anteaters have their tongues attached to their sternums, between the clavicles. Elephants are the only animals with four fully-developed knee joints. Koalas have fingerprints that are almost indistinguishable from human ones. Sharks have no bones and their closest relatives are rays. Crocodiles can go without eating for a whole year (but they feel blue). Zebras are black with white stripes, not the other way round (white appears on black skin). 1% of people have cervical ribs. Squids, cuttlefish, and octopuses can edit their RNA “on the fly”.

As it turns out, René Descartes is behind the name of the coordinate system both in Russian and in the rest of the world: in Russian it is named directly after him (the "Dekartova" system, from Dekart), while the English "Cartesian" comes from Cartesius, the Latinized form of his surname, des Cartes. Same person, two names.

Bridging Brain Functions and Language Models through Predictive Processing | February 09 2025, 21:39


I’ve been thinking that understanding how large language models (LLMs, like ChatGPT) function explains how our (or at least my) brain probably works, and vice versa: observing how the brain functions can lead to a better understanding of how to train LLMs.

You know, LLMs are based on a simple idea: choosing a plausible next word after the N known ones that form the "context". For this, LLMs are trained on a gigantic corpus of text, which demonstrates what words typically follow others in various contexts.
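
To make "choosing the next word" concrete, here is a minimal sketch using the Hugging Face transformers library and the small GPT-2 model (my choice for illustration; the post doesn't name any particular tooling):

```python
# A toy illustration of next-token prediction with a small open model (GPT-2).
# Assumes `pip install transformers torch`; GPT-2 is just a stand-in for any LLM.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("gpt2")
model = AutoModelForCausalLM.from_pretrained("gpt2")

context = "The cat sat on the"
inputs = tokenizer(context, return_tensors="pt")

with torch.no_grad():
    logits = model(**inputs).logits      # scores for every vocabulary token at every position

next_token_logits = logits[0, -1]        # scores for whatever comes right after the context
top5 = torch.topk(next_token_logits, k=5)
for token_id, score in zip(top5.indices, top5.values):
    print(repr(tokenizer.decode(int(token_id))), float(score))
```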

So, when you study any language, like English, this stage is inevitable. You need to encounter a stream of words in any form—written or spoken—so that your brain can discover and assimilate patterns simply through observation or listening (and better yet, both—multimodality).

In LLMs, the basic units are not words but tokens: whole words and, often, parts of words. After processing a vast corpus of text, it turned out to be straightforward to simply find the most common sequences, which are sometimes whole words and sometimes fragments of words. Similarly, when you start to speak a foreign language, especially one with a system of endings, you begin pronouncing the start of a word while your brain is still churning through the "calculation" of the ending.
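
A quick illustration of how a tokenizer splits text (again using the GPT-2 tokenizer as a stand-in; the exact splits depend on the model's vocabulary):

```python
# How a subword tokenizer breaks words into tokens, using GPT-2's BPE vocabulary as an example.
from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("gpt2")
for word in ["cat", "unbelievable", "internationalization"]:
    pieces = tokenizer.tokenize(word)
    print(word, "->", pieces)   # short common words tend to stay whole, longer ones split into pieces
```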

When we read text or listen, we don't actually analyze words letter by letter, because important pieces very often simply disappear due to fast or unclear speech or typos. The brain doesn't need to sift through all the words that look or sound like the given one; it only needs to check whether what is heard or seen matches a very limited set of words that could logically follow the previous ones.

It’s a separate story with whole phrases. In our brain, they form a single “token”. That is, they are not broken down into separate words, unless you specifically think about it. And such tokens also appear in the stream not accidentally—the brain expects them, and as soon as it hears or sees signs that the phrase has appeared, the circle of options narrows down to literally 1-2 possible phrases with such a beginning, and that’s it—one of them is what was said or written.

But the most interesting thing is that recent research has shown the human brain really does work very similarly to LLMs. In the study "The neural architecture of language: Integrative modeling converges on predictive processing", MIT scientists showed that models that better predict the next word also more accurately model brain activity during language processing. So the mechanism used in modern neural networks is not just inspired by cognitive processes; it actually reflects them.

During the experiment, fMRI and electrocorticography (ECoG) data were analyzed during language perception. The researchers found that the best predictive model at the time (GPT-2 XL) could explain almost 100% of the explainable variation in neural responses. This means that the process of understanding language in humans is really built on predictive processing, not on sequential analysis of words and grammatical structures. Moreover, the task of predicting the next word turned out to be key—models trained on other language tasks (for example, grammatical parsing) were worse at predicting brain activity.

If this is true, then the key to fluent reading and speaking in a foreign language is precisely training predictive processing. The more the brain encounters a stream of natural language (both written and spoken), the better it can form expectations about the next word or phrase. This also explains why native speakers don’t notice grammatical errors or can’t always explain the rules—their brain isn’t analyzing individual elements, but predicting entire speech patterns.

So, if you want to speak freely, you don’t just need to learn the rules, but literally immerse your brain in the flow of language—listen, read, speak, so that the neural network in your head gets trained to predict words and structures just as GPT does.

Meanwhile, there's the theory of predictive coding, which asserts that, unlike language models predicting only the nearest words, the human brain forms predictions at different levels and time scales. This was tested by other researchers (google "Evidence of a predictive coding hierarchy in the human brain listening to speech").

Briefly, the brain doesn't just predict the next word; it is as if several prediction processes of different "resolutions" run at once. The temporal cortex (lower level) predicts short-term, local elements (sounds, words). The frontal and parietal cortex (higher level) predicts long-term, global language structures. Semantic predictions (the meaning of words and phrases) cover longer time intervals (≈8 words ahead), while syntactic predictions (grammatical structure) have a shorter horizon (≈5 words ahead).

If you try to transfer this concept to the architecture of language models, you could improve their performance with a hierarchical predictive system. Currently, models like GPT operate with a fixed context window: they analyze a limited number of previous words and predict the next one, never going beyond those boundaries. In the brain, however, predictions work at different levels: locally, at the level of words and sentences, and globally, at the level of entire semantic blocks.

One of the possible ways to improve LLMs is to add a mechanism that simultaneously works with different time horizons.

Interestingly, could you set up an LLM so that some layers specialize in short-range language dependencies (e.g., adjacent words), and others in longer structures (e.g., the semantic content of a paragraph)? I googled it, and something similar exists under the topic of "hierarchical transformers", where layers interact with each other at different levels of abstraction, but that work is mostly about processing very long documents.
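
Just to make the idea concrete, here is a toy sketch of my own (not an existing architecture): a single self-attention layer in PyTorch where each head is restricted to its own causal window, so some heads only see a few neighboring tokens while others see the whole context.

```python
# Toy self-attention where each head gets its own causal window:
# "short-range" heads see only a few previous tokens, "long-range" heads see everything.
import torch
import torch.nn.functional as F

def windowed_causal_attention(x, num_heads=4, windows=(4, 16, 64, None)):
    """x: (batch, seq_len, dim). One window size per head; None = full causal context."""
    batch, seq_len, dim = x.shape
    head_dim = dim // num_heads
    # In a real layer these would be learned projections; identity keeps the sketch short.
    q = k = v = x.view(batch, seq_len, num_heads, head_dim).transpose(1, 2)  # (b, h, s, hd)

    scores = q @ k.transpose(-2, -1) / head_dim ** 0.5                       # (b, h, s, s)
    pos = torch.arange(seq_len)
    causal = pos[None, :] <= pos[:, None]                                    # lower triangle: no peeking ahead
    for h, w in enumerate(windows):
        mask = causal if w is None else causal & (pos[:, None] - pos[None, :] < w)
        scores[:, h] = scores[:, h].masked_fill(~mask, float("-inf"))

    out = F.softmax(scores, dim=-1) @ v                                      # (b, h, s, hd)
    return out.transpose(1, 2).reshape(batch, seq_len, dim)

y = windowed_causal_attention(torch.randn(2, 128, 256))
print(y.shape)  # torch.Size([2, 128, 256])
```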

As I understand it, the problem is that for this you need to train foundation models from scratch, and it probably does not work well on unlabelled or poorly labelled content.

Another option is to use multitask learning, so that the model not only predicts the next word but also tries to guess what the nearest sentence or even the whole paragraph will be about. Again, googling shows that this can be implemented, for example, by splitting the attention heads in the transformer, so that some parts of the model analyze short-range language dependencies while others predict longer-term semantic connections. But as soon as I dive into this topic, my brain explodes. It's all really complex.

But perhaps, if it’s possible to integrate such a multilevel prediction system into LLMs, they could better understand the context and generate more meaningful and consistent texts, getting closer to how the human brain works.

I’ll be at a conference on the subject in March; will need to talk with the scientists then.

Nuclear Legacy: Carbon-14 and the Science of Dating Life | February 09 2025, 14:35

It turns out that nuclear tests between 1955 and 1963 left their mark in every living organism on Earth, and scientists are able to use this fact to determine the age of cells in any living (at that time) creature on Earth and the frequency of their renewal, which would have been significantly more challenging without the nuclear tests. There is even a specific term “C-14 bomb-pulse dating”.

This is how the radiocarbon analysis works. From 1955 to 1963, atmospheric nuclear tests roughly doubled the amount of carbon-14 in the atmosphere. Atmospheric carbon-14, which is normally produced only by cosmic radiation, reacts with oxygen, forming carbon dioxide (¹⁴CO₂). Plants absorb this ¹⁴CO₂ during photosynthesis, animals eat the plants, and we eat both, so carbon-14 becomes incorporated into our tissues at roughly the atmospheric concentration.

Most tissues in living organisms gradually renew over weeks or months, so the carbon-14 content in them corresponds to the current atmospheric level. However, tissues that either do not renew or renew very slowly will contain a carbon-14 level close to that of the atmosphere at the time they were formed. Thus, by measuring the carbon-14 content in the tissues of people who lived during and after the peak of the “bomb pulse”, the rate of replacement of certain tissues or their components can be precisely estimated.

This means that nuclear tests, inadvertently, have provided scientists with a way to understand when tissues are formed, how long they last, and how rapidly they are replaced.

It turns out that practically every tree that has lived since 1954 contains a “spike” – a kind of souvenir from the atomic bombs. Wherever botanists look, they find this marker. There are studies in Thailand, studies in Mexico, studies in Brazil—wherever you measure the carbon-14 level, it’s there. All trees carry this “marker”—trees of northern latitudes, tropical trees, rainforest trees—it’s a worldwide phenomenon.

But there's a catch. The excess carbon-14 in the atmosphere halves roughly every eleven years (it is absorbed by the oceans and the biosphere; radioactive decay is far too slow, with a half-life of about 5,730 years). Once the carbon-14 level returns to its original value, the method will become useless. Scientific American explains that "scientists have the opportunity to use this unique dating method only for a few decades until the carbon-14 level returns to normal." This means that if they want to use this method, they need to hurry. Unless there are new nuclear explosions, but no one wants that.
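
For the arithmetic, here is a toy calculation built only on the numbers in this post (a clean exponential with an 11-year halving); the real method calibrates against measured atmospheric curves, so this is just to show the shape of the reasoning.

```python
# Toy bomb-pulse dating: given a measured excess of carbon-14 in a tissue sample,
# estimate its formation year, assuming (as described above) that the atmospheric
# excess peaked around 1963 and has halved roughly every 11 years since.
import math

PEAK_YEAR = 1963
PEAK_EXCESS = 1.0          # excess above the pre-bomb level at the peak (~+100%, i.e. doubled)
HALVING_YEARS = 11.0

def excess_in_year(year):
    """Modelled atmospheric C-14 excess (fraction above normal) for a year after the peak."""
    return PEAK_EXCESS * 0.5 ** ((year - PEAK_YEAR) / HALVING_YEARS)

def formation_year(measured_excess):
    """Invert the curve: in which post-peak year did the atmosphere have this excess?"""
    return PEAK_YEAR + HALVING_YEARS * math.log2(PEAK_EXCESS / measured_excess)

# A tooth whose enamel shows ~25% excess C-14 would date to two halvings after the peak:
print(round(formation_year(0.25)))   # -> 1985
```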

Besides, this method makes it possible to determine a person's age from their teeth and hair. Once a tooth is formed, the amount of carbon-14 in its enamel remains unchanged, making it an ideal tool for dating. Because specific teeth form at specific ages, measuring the carbon-14 content in different teeth lets researchers estimate a range of birth years. The same holds for hair, which grows about 1 cm per month, so conclusions can also be drawn from the carbon-14 content in different parts of a hair.

About one-third of an entire tooth, roughly 100 milligrams, is needed for dating the carbon in teeth. To prepare the sample, it is ground and dissolved in acid, which releases CO₂. Hair, instead of being dissolved in acid, is burned; since hair has a high carbon content, only 3-4 milligrams are needed. The CO₂ from the tooth or hair sample is then reduced to graphite, a crystalline form of carbon, and placed in the ion source at CAMS (the Center for Accelerator Mass Spectrometry), where the neutral graphite atoms are ionized by giving them a negative charge. The accelerator then uses this charge to speed up the sample, enabling the carbon isotope ratios to be detected, counted, and compared. On the graphs, pMC (percent modern carbon) represents the ratio of the sample's carbon-14 concentration to the modern reference level.

In the 1960s, when the concentration of C-14 was sharply changing, the method allowed the determination of tissue age to an accuracy of ±1 year. However, after 2000, as the C-14 levels evened out, the accuracy dropped to ±2–4 years.

Unpacking Hidden Data Collection in Mobile Apps | February 08 2025, 16:20

I recently stumbled upon an intriguing study on the Timsh.org website, where the author dissected how applications collect and transmit your data. For the experiment he took an old iPhone, installed a more or less random application on it (Stack by KetchApp), and intercepted its traffic to see what the app sends to the outside world. A lot of data was transmitted, even after answering "no" to the "Allow tracking?" prompt.

Specifically: the IP address (which allows your approximate location to be determined via IP geolocation lookups), approximate geolocation (even with location services disabled), device model, battery charge level, screen brightness level, amount of free memory, and other parameters.

The data does not go to the company that created the application, but to various third parties. In other words, these third parties collect data from most of the applications on your phone, and the data flows every time an application runs.

The author writes about two major groups of players – SSP and DSP.

SSPs (Supply-Side Platforms) are the ones that collect data from the application: Unity Ads, IronSource, Adjust. Then there are DSPs (Demand-Side Platforms), which run the advertising auctions, such as Moloco Ads and Criteo.

Advertisers gain access to the data through DSPs. Data brokers, such as Redmob and AGR Marketing Solutions, aggregate and resell the data. The latter sells databases that include PII such as name, address, phone number, and even advertising identifiers (IDFA/MAID).

What data is sent? For instance, that Stack app from KetchApp sent Unity Ads the geolocation (latitude, longitude), the IP address (including server IPs, for example Amazon AWS), unique device identifiers (IDFV, the per-developer identifier, and IDFA, the advertising identifier), as well as additional parameters such as the phone model, battery level, free memory, screen brightness, whether headphones are connected, and even the exact system boot time.
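
To make that concrete, here is a purely illustrative Python dict with the kinds of fields listed above; the field names and values are invented for readability and do not reproduce the actual Unity Ads request format.

```python
# Hypothetical illustration of the kind of payload an ad SDK assembles;
# all field names and values are invented, not taken from any real API schema.
ad_sdk_payload = {
    "idfv": "A1B2C3D4-EXAMPLE",                      # per-developer device identifier
    "idfa": "00000000-0000-0000-0000-000000000000",  # zeroed out when tracking is denied
    "ip": "203.0.113.42",                            # documentation-range IP as a stand-in
    "geo": {"lat": 40.71, "lon": -74.00},            # approximate location
    "device": {"model": "iPhone8,1", "os": "iOS 15.8"},
    "battery_level": 0.63,
    "screen_brightness": 0.8,
    "free_memory_mb": 512,
    "headphones_connected": False,
    "boot_time": "2025-02-01T08:12:33Z",
}
```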

On the DSP side there is an RTB (real-time bidding) system for selling this information. Data travels from the app to an SSP (such as Unity Ads) and then to DSPs (such as Moloco Ads), where auctions are held in real time to decide which ad to show. At each stage, the data is passed on to dozens, if not hundreds, of companies.

Yes, by answering "I do not want to share data," you only disable the sending of the IDFA (the advertising identifier); other data, such as the IP address, User-Agent, geolocation, and all those phone-model and free-memory parameters, are still transmitted. Combined, they work as a fingerprint just as well as the advertising identifier does. If they want to, applications can still identify you by many parameters: IP address, device model, OS version, fonts, screen resolution, battery level, time zone, and so on, since they receive this information from hundreds of other places. Another matter is that the "end applications" themselves don't need this (it isn't free), but the parties that show you ads do, and they have this info. And, of course, various intelligence services can easily access it if necessary.

If you use several apps from one developer, the IDFV identifier allows the data from all of them to be linked together.

Perhaps it's no secret at all, but almost every app sends data to Facebook (Meta) without asking for the user's consent. That is, if you have Facebook on your phone, then bingo: data from any other app starts being tied to your profile, even if you have forbidden those apps to share information.

Companies exchange user data with each other. For instance, Facebook exchanges information with Amazon, Google, TikTok, and mobile SDKs (such as Appsflyer, Adjust) perform cross-linking of users between different services because such exchanges enhance the value and quality of information immediately for all participants.

Meanwhile, it turns out that Unity, which is nominally in the business of 3D game engines, primarily earns from selling this collected data: in 2023, its "Mobile Game Ad Network" direction brought in about $2 billion in revenue. In 2022, Unity absorbed IronSource, another giant of mobile advertising that analyzes user behavior, optimizes monetization, and sells data to advertisers. Now, through LevelPlay, Unity can manage not just ad placement but also data aggregation, selling the data on to other companies.

A significant portion of mobile games are created on Unity, especially free-to-play games. This allows Unity to have access to data from millions of devices globally, even without explicit user consent. Developers often do not realize how deeply Unity tracks data in their games.

Conclusion: disabling ads or prohibiting tracking at the OS level is just a minor obstacle. Data about you is still being collected, analyzed, and transmitted to hundreds of companies.

See the link below

Luck Over Talent: Decoding the True Drivers of Success | February 08 2025, 00:51

A lengthy post on how to achieve success! For free! No registration or SMS required! I just stumbled upon a scientific study proving that the role of chance in success is greater than that of talent. This resonated with my belief that successful people are successful not because they are extraordinarily talented, smart, or unusual, but rather because they have been lucky. Note: not because they are "lucky ducks" by nature, but because luck has happened to them. These are different things.

Let me back this up. There's a study, "Talent vs Luck: the role of randomness in success and failure," by Alessandro Pluchino, Alessio Emanuele Biondo, and Andrea Rapisarda. The funny part is that the authors received the Ig Nobel Prize for this work (a symbolic award for scientific discoveries that "first make people laugh, and then make them think"). They used agent-based modeling to analyze the contributions of talent and luck to success.

As inputs, they took presumably objective facts: talent and intelligence are distributed across the population according to the normal (Gaussian) distribution, where most people have an average level of these qualities and extreme values are rare, while wealth, often used as an indicator of success, follows the Pareto distribution (a power law), where a small number of people own a significant portion of the resources and the majority own only a small share.

The authors then built a simple model in which 1,000 agents with varying levels of talent are exposed to random events over a hypothetical 40 years; each event can be either favorable (luck) or unfavorable (misfortune), and each one affects the agent's "capital", which serves as the measure of its success.

Result: although a certain level of talent is necessary to achieve success, it is usually not the most talented individuals who become the most successful, but those with an average level of talent who experience more lucky events. There is a strong correlation between the number of lucky events and the level of success: the most successful agents are also the luckiest.
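
Out of curiosity, here is a minimal sketch of that model in Python; the parameters (talent ~ N(0.6, 0.1), 80 half-year steps, event probabilities) and the rule that a lucky event pays off only with probability equal to talent are my reading of the paper's description, so treat it as an illustration rather than a faithful reproduction.

```python
# Minimal sketch of the Talent vs Luck agent-based model described above.
import numpy as np

rng = np.random.default_rng(0)

N_AGENTS = 1000
N_STEPS = 80                 # 40 years in 6-month steps
P_EVENT = 0.25               # chance an agent meets any event in a step (assumed)

talent = np.clip(rng.normal(0.6, 0.1, N_AGENTS), 0, 1)   # Gaussian talent
capital = np.full(N_AGENTS, 10.0)                         # everyone starts equal
lucky_events = np.zeros(N_AGENTS, dtype=int)

for _ in range(N_STEPS):
    hit = rng.random(N_AGENTS) < P_EVENT                  # who encounters an event this step
    lucky = hit & (rng.random(N_AGENTS) < 0.5)            # half the events are lucky
    unlucky = hit & ~lucky
    # A lucky event pays off only if a talent check passes; bad luck always costs.
    exploited = lucky & (rng.random(N_AGENTS) < talent)
    capital[exploited] *= 2
    capital[unlucky] /= 2
    lucky_events += exploited

richest = np.argsort(capital)[-10:]
print("talent of the 10 richest agents:", np.round(talent[richest], 2))
print("lucky events of the 10 richest: ", lucky_events[richest])
print("max talent in the population:   ", round(talent.max(), 2))
```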

My observations of how the world works completely agree with these conclusions. You just need to act so that luck has more chances to find you. That's it. Don't try to be the smartest; it doesn't help as much as the following things do:

1) Being in environments where important events occur. Silicon Valley for startup founders. New York for financiers. Hollywood for actors. If an environment increases the chance of meeting "key" people, it makes sense to place yourself in that environment.

2) Creating more points of contact with the world and maintaining them. Running a blog, writing articles, giving interviews. Attending conferences, participating in communities. Calling and writing to acquaintances and semi-acquaintances, especially when such calls and letters are potentially important to them. Expanding the number of contacts—even if 99% are useless, 1% can change your life.

3) Increasing the number of attempts. The more projects, the higher the chance that one of them will "hit." The best example is venture funds: they invest in dozens of startups, knowing that success will come from only one. Artists, writers, and musicians create hundreds of works, knowing that only one will become a hit.

Unfortunately, for this point, you need to love your work. So choose a task where attempts are enjoyable.

Organizational psychologist Tomas Chamorro-Premuzic in his book “Why Do So Many Incompetent Men Become Leaders?” asserts that luck accounts for about 55% of success, including such factors as the place of birth and family wealth. This is true, but since you are sitting on Facebook on an iPhone with a cup of coffee and not herding cows in a loincloth in Africa, you already have pretty good initial conditions.

From here, an interesting conclusion: is it necessary to study at a university to achieve success in life? Look at the points above: being in the right environment, creating more points of contact, increasing the number of attempts. Of these three, the first two work better with in-person study, while the third works poorly, because university consumes 4-5 years of life (and a degree is a single attempt). But the other two criteria matter a lot: during their studies, the average student interacts with hundreds of peers, which can contribute significantly to the likelihood of that student's success.

But sitting at home with books for five years meets none of the criteria. Online education lies somewhere in between; it varies, but it's closer to the "sitting with textbooks" option.

The authors of the study confirmed the concept of “The Matthew Effect.” This is from the Bible: “For unto every one that hath shall be given, and he shall have abundance: but from him that hath not shall be taken away even that which he hath.” (Matthew 25:29). They explain why success accumulates even if it is initially random:

People who are fortunate in the early stages receive more resources, opportunities, and attention. This, in turn, increases their chances for new fortunate events. As a result, those who were initially in a better position continue to build on their success, while the rest lag behind.

This explains why wealthy people tend to get access to profitable investments, popular artists become even more popular while lesser-known ones remain in the shadows, and companies that catch the wave attract more customers and resources than their less fortunate competitors.

That's why success also involves the principle of "fake it till you make it." Successful people often exaggerate their skills or achievements, and then catch up to the proclaimed level. Society easily forgives and quickly forgets such exaggerations, and once they work (and they often do), the person no longer really needs them. There's also the self-fulfilling prophecy: the idea that if a person states something as a fact (even if it's an exaggeration), they and those around them start behaving as if it were true, and eventually it becomes reality.

There’s also the principle of “there’s no harm in asking” (It doesn’t hurt to ask). The principle is that if the likelihood of success is increased by asking someone a question (“can you raise my salary starting in March or put me in charge of that project”), then it’s worth asking. You never know unless you ask.

And one more thing. Act now, apologize later. Actions speak louder than words. Being at the right time in the right place involves not only the right place (point one from my list) but also the right time. So just do it. People who act rather than just dream don't end up homeless on the street because they rushed into something.

And finally. Time is a finite resource. There was a good idea about the sheet with squares—google “90 years of life in weeks.” You can color the lived weeks and look at the remaining ones.

So, in summary.

Success is determined by luck, not talent. Talent helps, but is often formed under the influence of success. Knowledge is useful, but experience is more valuable. Time is a finite resource. Planning doesn’t work, three things do:

1) being in an environment where important events occur,

2) creating more points of contact with the world and maintaining them,

3) increasing the number of attempts where luck might work.

Three principles:

1) Fake it till you make it

2) It doesn’t hurt to ask

3) Actions speak louder than words

The Paradox of Software Complexity and AI’s Role in Legacy Systems | February 07 2025, 14:30

It is fascinating to observe how, with increasing complexity and over time, software transitions into a state of being "a thing in itself": even the developers no longer fully understand how it works, or, more precisely, why it sometimes suddenly malfunctions, and they prefer to interfere with it minimally, which over time leads them to understand it even less, so it solidifies into what it is for years. This process is known as software rot or legacy paralysis.

However, bosses and the market demand development, so instead of fundamentally changing and improving something, developers add “bells and whistles” which grow alongside, rather than changing the core product. It’s well understood that diving into the core product might set you on a path leading to disappointments, deadline failures, layoffs, etc.

Interestingly, with the advent of AI this problem will, on the one hand, only intensify, because the team will understand even less about how things work; on the other hand, complexity can be managed better, because AI can digest complex systems more easily than a single biological brain.

For instance, AI could be used to create tests for existing code, to perform anomaly detection and hunt for potential bugs, to generate documentation and explain the code structure from simple to complex, and to partly automate refactoring and spot performance bottlenecks.
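
As a sketch of the "generate tests for existing code" idea: roughly how one might wire it up with the OpenAI Python client. The model name, prompt, and file path are placeholders, and an internal LLM endpoint would be used the same way.

```python
# Rough sketch: ask an LLM to draft pytest tests for a legacy module.
# Assumes `pip install openai` and an OPENAI_API_KEY in the environment;
# the model name, prompt wording, and file path are placeholders, not recommendations.
from openai import OpenAI

client = OpenAI()

legacy_source = open("billing/tax.py").read()   # hypothetical legacy module

response = client.chat.completions.create(
    model="gpt-4o-mini",
    messages=[
        {"role": "system", "content": "You write pytest tests that pin down current behavior."},
        {"role": "user", "content": "Write characterization tests for this module, "
                                    "covering the edge cases you can infer:\n\n" + legacy_source},
    ],
)

print(response.choices[0].message.content)       # review before committing, of course
```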

I believe such AI solutions for working with legacy will soon be a major market.

Exploring Frances Bell: A Modern Master of Detail and Color | February 06 2025, 20:59

A truly cool English artist, Frances Bell. Her works are either portraits or scenes of people by the water, yet all are executed with immense attention to detail without the actual details, directly following Sargent's principles. Traditional technique, simply people, alive, posing. Frances says she works only from life, never from photographs. Look at how splendidly she conveys color and shape "with a single stroke".

Similar posts are grouped under the tag #artrauflikes, and at beinginamerica.com in the 'Art Rauf Likes' section all 145 of them are available (unlike Facebook, which loses track of almost half).


Navigating Life with ChatGPT: My AI Assistant Addiction | February 05 2025, 21:04

So, I’ve developed a bit of a ChatGPT addiction. It has overtaken Google and Facebook and is slowly creeping into all areas of life.

(Strictly speaking, I don't use only ChatGPT: for certain work needs we have to use an analog developed by our engineers on our internal corporate network, so everything below is about AI assistants in general, not just ChatGPT. For personal needs, though, it's ChatGPT only for me.)

(1) Over the last six months, I've probably created a couple hundred Python scripts for data processing. I didn't write any of them myself (although I could; ask me again in a year or two, and I might no longer be able to). To get a data-processing script, I just state clearly what I need, look closely at the result, and if I like it, I run it. If it doesn't quite work and something needs tweaking, I tweak it myself. If it's completely off, I ask for it to be redone. Most often, I end up with what I need. Example: read a CSV, create embeddings for all lines, cluster them, then write the results to separate files with the cluster number in the name. Or implement some complex data grouping.
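
For a flavor of what those generated scripts look like, here is a minimal sketch of the CSV / embeddings / clustering example; the file name, column name, and the use of sentence-transformers with scikit-learn are my assumptions, since ChatGPT might just as well reach for the OpenAI embeddings API.

```python
# Minimal sketch: read a CSV column, embed each line, cluster, write one file per cluster.
# Assumes `pip install pandas scikit-learn sentence-transformers`; file and column names are made up.
import pandas as pd
from sentence_transformers import SentenceTransformer
from sklearn.cluster import KMeans

df = pd.read_csv("lines.csv")                      # hypothetical input with a "text" column
texts = df["text"].astype(str).tolist()

model = SentenceTransformer("all-MiniLM-L6-v2")    # small general-purpose embedding model
embeddings = model.encode(texts)

n_clusters = 10
labels = KMeans(n_clusters=n_clusters, n_init="auto", random_state=0).fit_predict(embeddings)

df["cluster"] = labels
for cluster_id, group in df.groupby("cluster"):
    group.to_csv(f"cluster_{cluster_id}.csv", index=False)
```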

I must mention bash commands separately. For example, I can never recall how to sort the lines of a file by length on the command line and get the longest ones. Or I'm too lazy to remember the detailed awk or jq syntax to process something from files through a pipe; it's easier to ask ChatGPT.

(2) Lately, I frequently translate between Russian and English using LLMs. Rather than writing something in English myself, it’s easier to write it in Russian, get the translation, and then throw it into an email. It’s simply faster. It’s not even about the proficiency in English – of course, I could write it all myself. It’s about how much time is spent on phrasing. The argument “it’s twice as fast and clearer” beats all else. A downside—my English isn’t improving because of this.

(3) Generally, I run nearly 100% of the English texts I write through various LLMs, depending on the type of text. I ask them to correct the grammar, then copy-paste the result wherever I need it, into an email or a Jira ticket. I suspect I'll soon develop anxiety about having sent something unreviewed, because they always find something to correct, even if it's just a missing article or a comma.

(4) When I’m too lazy to read large chunks of English text, I frequently throw them into ChatGPT and ask for a summary—sometimes in Russian. Can’t do this for work because the texts are often from clients, but if it’s really necessary, I also have access to a local LLM.

(5) I’m increasingly validating various design decisions (not visual design, but software design) through ChatGPT/LLM. I ask for criticism or additions. Often, the results make me think about what needs to be improved or what assumptions need to be added.

(6) I also use it for summarizing YouTube videos. I just download the subtitles as TXT through a YouTube subtitle downloader, throw them into an LLM, and then I can request a summary or ask questions about the content. It really helps in deciding whether to watch the video or not.

What are your usage patterns?