What will be left of our present heritage after a few generations have passed?
“Before 300 BC, humanity had produced 1,000 bytes of data per capita,” says Victor Zhirnov, scientific director of the Semiconductor Research Corporation, a technology-foresight organization funded by manufacturers and the US government. “In the AD era, the per-capita share was equivalent to 100,000 bytes of information. Today, the rate of data production has reached 10,000 billion bytes per person. On these figures, we will soon and inevitably have a problem storing this mass of information!”
In 2011, in the same vein, Martin Hilbert (University of Southern California) and Priscila López (Open University of Catalonia) published their calculations of the planet’s data production, which in 2010 came to about one zettabyte (ZB), the equivalent of a thousand billion billion characters, each character being encoded in eight bits. They also predicted that between 300,000 and 700,000 ZB would be produced by 2040.
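The scale of these figures can be checked with a little arithmetic. The sketch below assumes the decimal definition of a zettabyte (10^21 bytes) and a rough 2010 world population of 7 billion, which is an assumption, not a number from the article:

```python
# Quick arithmetic behind the figures above (decimal units assumed).
ZETTABYTE = 10**21  # bytes; each 8-bit character occupies one byte

# 1 ZB therefore holds "a thousand billion billion" characters:
assert ZETTABYTE == 1_000 * 10**9 * 10**9

# Spread 2010's ~1 ZB over the 2010 world population
# (roughly 7 billion people -- an assumption, not from the article):
per_person_bytes = ZETTABYTE / (7 * 10**9)
print(f"~{per_person_bytes / 10**9:.0f} GB per person")  # ~143 GB
```

On these assumptions, 2010’s global output already amounted to well over a hundred gigabytes for every person on Earth.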
In light of the growing problems faced by manufacturers of magnetic disks, and the inability of optical media to compete on density, attention first turned to silicon flash memory.
Silicon is abundant: it makes up more than a quarter of the Earth’s crust, mostly in the form of silicon dioxide. It is extracted mainly from sand before being refined and formed into blocks of 99.9999% purity. The purification process consumes a great deal of electrical energy, about 2,000 kWh per kilogram of pure silicon.
Moreover, few deposits contain silicon pure enough to meet industrial requirements.
According to forward-looking scientific reports, the electronics industry’s silicon needs for 2040 are estimated at between 50 million and 100 billion tons, whereas the industry will be able to supply no more than 100,000 tons by then.
The expected shift in flash-memory etching technology from 2D to 3D should cut silicon needs to one-tenth of that amount, but the energy required to produce even that much would equal 400 times current global electricity consumption. Hence the search for another technology for storing data.
Meanwhile, the team of physicist Peter Kazansky at the University of Southampton in England has developed an innovative method that multiplies storage density by a factor of 100 to 1,000 compared with a Blu-ray disc.
It works by densely recording data inside a quartz crystal with a powerful laser, which locally alters the way light propagates through the material. This allows data to be recorded in five dimensions. The team has managed to record 1 megabyte of data in an hour, twice its rate of the previous year, and is now aiming for a recording rate of 100 megabytes per second.
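To put the 100- to 1,000-fold density figure in perspective, here is a back-of-the-envelope sketch of the implied capacities, assuming a standard 25 GB single-layer Blu-ray disc as the baseline (the article does not state which disc variant is meant):

```python
# Implied capacity of the quartz medium relative to Blu-ray.
# Assumes a 25 GB single-layer Blu-ray disc as the baseline.
BLU_RAY_GB = 25

capacities_tb = {factor: BLU_RAY_GB * factor / 1000  # GB -> TB
                 for factor in (100, 1000)}
for factor, tb in capacities_tb.items():
    print(f"{factor}x Blu-ray density: {tb:g} TB")
# 100x  -> 2.5 TB
# 1000x -> 25 TB
```

Under these assumptions, a single quartz crystal of comparable size would hold between 2.5 and 25 terabytes.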
As for Victor Zhirnov and many manufacturers, they believe a long-term solution may lie in storage at the scale of atoms, an idea proposed as early as 1959 by the Nobel laureate in physics Richard Feynman. His basic insight could become the key to research on storing data within DNA molecules.
DNA is built from four basic molecules, the bases adenine, cytosine, thymine and guanine. The Russian physicist Mikhail Neiman proposed using the four letters A, C, T and G to play the role that the binary digits 0 and 1 play in today’s computers. For lack of means to analyze (read) and synthesize (write) nucleic acids, the idea remained shelved until the first DNA-sequencing instrument was invented, in Leroy Hood’s laboratory in 1980. Seven years later, in the same laboratory, the first machine to synthesize a DNA strand was built.
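The principle behind Neiman’s proposal can be sketched in a few lines: with four bases available, each base can carry two bits. The pairing below (00→A, 01→C, 10→G, 11→T) is one arbitrary choice among many, not a scheme from the article, and real DNA storage systems add error correction and avoid problematic base runs:

```python
# Toy two-bits-per-base DNA encoding (mapping chosen arbitrarily).
BITS_TO_BASE = {"00": "A", "01": "C", "10": "G", "11": "T"}
BASE_TO_BITS = {base: bits for bits, base in BITS_TO_BASE.items()}

def encode(data: bytes) -> str:
    """Turn bytes into a DNA-letter string, two bits per base."""
    bits = "".join(f"{byte:08b}" for byte in data)
    return "".join(BITS_TO_BASE[bits[i:i+2]] for i in range(0, len(bits), 2))

def decode(dna: str) -> bytes:
    """Inverse of encode: every four bases become one byte."""
    bits = "".join(BASE_TO_BITS[base] for base in dna)
    return bytes(int(bits[i:i+8], 2) for i in range(0, len(bits), 8))

message = b"DNA"
strand = encode(message)
print(strand)  # 12 bases, four per byte
assert decode(strand) == message
```

At two bits per base, each byte of data costs four bases, so a strand of n bases stores n/4 bytes.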
Research in this field has continued ever since. The first experiment to encode non-biological information in DNA was carried out in 1988, when, with the help of Harvard University, Joe Davis encoded a small 35-byte graphic into part of the DNA of a living bacterium.
In 1994, the University of Southern California mathematician Leonard Adleman built the first biological computer in history, using DNA molecules to represent the data for a simple computation.
In 2010, Craig Venter’s group, at the institute bearing his name, achieved the first direct translation into biomolecules of data drawn from a database: a complete bacterial genome was synthesized and then inserted into an organism. The synthesized genome included a sequence of 7,920 characters of non-biological information, among them the names of the researchers.
(continued next week)