3 Mathematical Laws To Learn In Data Science

In addition to technology, programming, and data analysis, knowing some mathematical laws is essential for the data scientist. See three of them here! A data scientist needs knowledge of programming and technology in their day-to-day work.

But mathematics is also essential in data science, and a professional in the field needs to master some of its concepts. Below, we explain three essential mathematical laws every data scientist should know. Follow along!

How Essential Is Mathematics For Data Science?

Data science is built on several mathematical concepts and standards. A data scientist needs knowledge of statistics and probability theory to conduct correct analyses and understand the data being studied.

It can even be said that a data scientist's success lies precisely in their ability to apply mathematical knowledge to data analysis. Even though mathematics is not the only knowledge a data scientist requires, it is undoubtedly one of the most important.

Discover The Primary Mathematical Laws In Data Science

Many mathematical laws are used in data science, especially those involving statistics and probability. Here, we have chosen three that every data scientist needs to know.

Benford’s Law

Benford’s law, also known as the law of the first digit, is one of the most useful mathematical laws in a data scientist’s work. It predicts how often each digit appears as the leading digit in many naturally occurring collections of numbers.

The Canadian astronomer Simon Newcomb had the first insight behind the law in 1881. While using a book of logarithm tables, he noticed that the first pages, the ones covering numbers that begin with 1, were more worn than the others. The same “coincidence” occurred in other copies of such books.

However, the law only became popular after studies by the American physicist Frank Benford, who revisited Newcomb’s observations in 1938. It is precisely for this reason that the law bears Benford’s name.

Benford analyzed data from 20 different contexts and found the same “coincidence.” He discovered the pattern in numbers such as population sizes, mortality figures, and the lengths of rivers, among others.

Benford’s discoveries led to the formulation of the law of the first digit. It states that the chance of a number’s first digit being 1 is greater than the chance of it being 2, and so on down to 9.

The law even comes with a table giving the expected frequency of each of the nine digits. According to the table, the digit 1 has a 30.1% chance of being the first digit of a number, while the digit 2 has a 17.6% chance.

The percentage decreases progressively until the digit 9, which has only a 4.6% chance of being the first digit. Benford’s law can be used to reveal accounting fraud, investigate electoral data, or check economic indicators.
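These percentages come from a simple formula: the probability that a number’s leading digit is d equals log10(1 + 1/d). As a minimal Python sketch, the loop below reproduces the table described above:

```python
import math

# Benford's law: the probability that a number's leading digit is d
# is log10(1 + 1/d). This loop reproduces the frequencies cited above.
for d in range(1, 10):
    probability = math.log10(1 + 1 / d)
    print(f"Leading digit {d}: {probability:.1%}")
```

Running it prints 30.1% for the digit 1 down to 4.6% for the digit 9; in a fraud check, an analyst would compare these expected frequencies against the leading digits actually observed in the data.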

Law Of Large Numbers

The law of large numbers (LLN) is a fundamental theorem of probability theory. It states that as an experiment is repeated more and more times, the average of the results gets closer and closer to the expected value.

It was first proposed by the Italian mathematician Girolamo Cardano (1501–1567). However, he did not present a convincing proof at the time. Only years later did the Swiss mathematician Jakob Bernoulli (1654–1705) prove the theorem rigorously.

As stated at the beginning of this topic, according to the law of large numbers, when a variable X is observed repeatedly, the average of the observations gets closer and closer to its expected value. It may seem slightly confusing at first, but things become clearer with an example.

Take the example of a coin being tossed. There are two possible outcomes, heads or tails, and each has an expected proportion of 50%. However, the tosses do not always show this proportion at first.

Early results can come in streaks; for example, tails may appear in four out of five consecutive tosses. In that case, the proportion is far from the expected value, sitting at 80% for tails.

But, as the law of large numbers predicts, if we repeat the toss many times and take the average, it gets closer and closer to the expected value. Data scientists can use the law of large numbers in financial and demographic calculations and also in artificial intelligence.
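To see this convergence in practice, here is a minimal Python sketch, using only the standard library, that simulates tosses of a fair coin and prints the running proportion of heads as the number of tosses grows:

```python
import random

random.seed(42)  # fixed seed so the run is reproducible

heads = 0
tosses = 0
for target in (10, 100, 1_000, 100_000):
    while tosses < target:
        heads += random.random() < 0.5  # one fair coin toss
        tosses += 1
    print(f"After {target:>7,} tosses: proportion of heads = {heads / tosses:.4f}")
```

Early proportions can wander well away from 0.5, just as in the four-out-of-five example above, but by 100,000 tosses the running proportion sits very close to the expected 50%.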

Zipf’s Law

Another mathematical law used in data science is Zipf’s law. It is named after the American linguist George Kingsley Zipf. Although he did not claim to have created it, he was responsible for popularizing and explaining it. Zipf’s law is a power law describing how values are distributed according to their rank in an ordered list.

In simple terms, the law says that the second element of such a list appears with roughly half the frequency of the first; in turn, the third appears with a third of the frequency of the first, and so on.

Zipf conducted his studies by analyzing James Joyce’s novel “Ulysses,” counting the words in the book and ordering them by frequency.

The results showed that the most common word appeared about 8,000 times, the tenth most common about 800 times, and the thousandth just eight times. Word counts like these also identify the most common word in English: the article “the.”

But far beyond just demonstrating the most used words, the law can be applied in data science, for example, to perform sentiment analysis on social networks and text classification, and it is also quite common in demographic indicators.
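As an illustration, here is a minimal Python sketch of a Zipf-style rank-frequency check. The file name ulysses.txt is a hypothetical stand-in; any sufficiently long plain-text corpus will show the same pattern:

```python
import re
from collections import Counter

# "ulysses.txt" is a hypothetical local plain-text file;
# substitute any long text to run the same check.
with open("ulysses.txt", encoding="utf-8") as f:
    words = re.findall(r"[a-z']+", f.read().lower())

ranked = Counter(words).most_common()
for rank in (1, 2, 3, 10, 100, 1000):
    if rank <= len(ranked):
        word, freq = ranked[rank - 1]
        # Under Zipf's law, rank * frequency stays roughly constant.
        print(f"rank {rank:>4}: {word!r} appears {freq} times (rank x freq = {rank * freq})")
```

If the corpus follows Zipf’s law, the product of rank and frequency stays in the same ballpark across ranks, which is exactly the 8,000 / 800 / 8 pattern described above.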

Conclusion

As you can see, mathematical laws play a fundamental role in data science and can be applied in all kinds of data analyses. Among them, Benford’s law, the law of large numbers, and Zipf’s law stand out as the most important for a data scientist’s work. Knowing them is extremely important for these professionals. And just as knowing mathematical laws in data science is vital, so is knowing programming concepts.
