During the 1960s and into the 1970s, the multitasking paradigm was gaining traction in the mainframe world. Initially, the concept was implemented in a cruder form known as multiprogramming. Multiprogramming was accomplished by processing programs in batches, jumping between them during regions of code that waited for hardware input. This would eventually evolve into time-sharing.
By the late 1960s, true multitasking started to emerge in operating systems such as DEC’s PDP-6 Monitor, IBM’s OS/360 MFT, and MULTICS. MULTICS would heavily influence the development of UNIX.
In a traditional single-process environment, the program being executed generally has full control of the CPU and its resources. This creates issues with efficient CPU utilization, stability, and security as software grows more complex.
In multitasking, CPU focus is shuffled between concurrently running processes.
Cooperative multitasking was used by many early multitasking operating systems. Whenever the operating system gives a process CPU focus, it relies on the process itself to voluntarily return control.
Preemptive multitasking solved the stability problems of cooperative multitasking by reliably guaranteeing each process a regular period or “time-slice” of CPU focus.
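The time-slice idea can be sketched in a few lines of Python. This is a toy round-robin simulation of the concept, not any real operating system's scheduler; the process names and work units are invented for illustration.

```python
from collections import deque

def round_robin(workloads, quantum):
    """Simulate preemptive time-slicing: each process runs for at most
    `quantum` units before the scheduler forcibly switches to the next."""
    ready = deque(workloads.items())  # (pid, remaining work)
    timeline = []
    while ready:
        pid, remaining = ready.popleft()
        ran = min(quantum, remaining)
        timeline.append((pid, ran))               # this slice of CPU focus
        if remaining - ran > 0:
            ready.append((pid, remaining - ran))  # preempted, requeued
    return timeline

# Three processes needing 5, 2, and 4 units of work, with a 2-unit time-slice:
print(round_robin({"A": 5, "B": 2, "C": 4}, 2))
# -> [('A', 2), ('B', 2), ('C', 2), ('A', 2), ('C', 2), ('A', 1)]
```

No process can monopolize the CPU: even process A, with the most work, is forced to yield every two units.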
We also need a way to prevent a process from using memory allocated to another process while still allowing processes to communicate with each other safely. The solution is a dedicated layer of hardware between the CPU and RAM called a memory management unit, or MMU. If a process attempts to access memory outside of these protection rules, a hardware fault is triggered.
On some MMUs, the concept of memory-access privileging is incorporated into memory management. By assigning levels of privilege to regions of memory, it becomes impossible for a process to access code or data above its own privilege level. This creates a trust mechanism in which less trusted, lower-privilege code cannot tamper with more trusted, critical code or memory.
Virtual memory is a memory management technique that provides an abstraction layer over the storage resources available on a system. While virtual memory comes in various implementations, they all fundamentally function by mapping memory accesses from logical addresses to physical ones.
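The logical-to-physical mapping can be illustrated with a short sketch. This assumes a hypothetical single-level page table with 4 KB pages; real MMUs do this in hardware, usually with multi-level tables and TLB caches, and a `KeyError` here merely stands in for a hardware page fault.

```python
PAGE_SIZE = 4096  # hypothetical 4 KB pages

def translate(page_table, virtual_addr):
    """Map a virtual address to a physical one via a simple page table.
    An unmapped page raises KeyError, our stand-in for a page fault."""
    page, offset = divmod(virtual_addr, PAGE_SIZE)
    if page not in page_table:
        raise KeyError(f"page fault: virtual page {page} is not mapped")
    return page_table[page] * PAGE_SIZE + offset

# Virtual pages 0 and 1 mapped to physical frames 7 and 2:
table = {0: 7, 1: 2}
print(translate(table, 4100))  # page 1, offset 4 -> frame 2 -> 8196
```

The key point is that the running program only ever sees virtual addresses; the mapping (and the fault on a bad access) is enforced outside the program's control.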
In January of 1983, Apple released the Lisa. It would soon be overshadowed by the release of the Apple Macintosh one year later. The Macintosh product line would grow dramatically over the years. The Macintosh ran on the Motorola 68K CPU.
What made the 68K so powerful was its early adoption of a 32-bit internal architecture. However, the 68K was not considered a true 32-bit processor but more of a hybrid 32/16-bit design, pairing a 32-bit programming model with 16-bit ALUs and a 16-bit external data bus. Despite these limitations, it proved to be a very capable processor.
Crucially, the 68K did support a simple form of privileging that made hardware-facilitated multitasking possible. The 68K always operates in one of two privilege states: the user state or the supervisor state.
By the end of 1984, IBM took its next step forward with the release of its second generation of personal computer, the IBM PC AT.
Among the new software developed for the AT was a project by Microsoft called Windows. With initial development beginning in 1981, Windows made its first public debut on November 10, 1983, when it was announced to the press; Windows 1.0 itself would not ship until November 1985.
The 80286 was groundbreaking for its time: it was the first mass-produced processor that directly supported multiuser, multitasking systems. It achieved this through several major architectural advancements over the 8086.
The first was the elimination of multiplexing on both data and address buses.
The second advancement was moving memory-addressing control into a dedicated block of hardware.
The third major enhancement was an improved prefetch unit. Known as the instruction unit, it would begin decoding up to three instructions from the 80286's 8-byte prefetch queue.
The 80286 used 24-bit memory addressing, supporting up to 16MB of RAM and making the 8086 memory model insufficient.
To make use of the full 16MB as well as facilitate multitasking, the 80286 could also operate in a state known as protected mode.
Segment descriptors provide a security framework by allowing write protection for data segments and read protection for code segments. If segment rules are violated, an exception occurs, triggering an interrupt that transfers control to operating system code.
The 80286’s MMU tracked all segments in two tables: the global descriptor table (GDT) and the local descriptor table (LDT), which combined could address up to 1GB of virtual memory.
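Each entry in these tables is an 8-byte segment descriptor. The sketch below decodes the commonly documented 286 layout (16-bit limit, 24-bit base split across three bytes, access-rights byte, reserved word); treat the field interpretation as illustrative rather than a complete account of every access-byte flag.

```python
import struct

def decode_descriptor(raw: bytes):
    """Decode an 8-byte 80286 segment descriptor: 16-bit limit,
    24-bit base, access-rights byte, reserved word (all little-endian)."""
    limit, base_lo, base_hi, access, _reserved = struct.unpack("<HHBBH", raw)
    return {
        "base":    (base_hi << 16) | base_lo,  # 24-bit physical base address
        "limit":   limit,                      # segment size - 1, in bytes
        "present": bool(access & 0x80),        # P bit: segment is in memory
        "dpl":     (access >> 5) & 0x3,        # descriptor privilege level 0-3
    }

# A present, privilege-level-0 segment at base 0x012345, limit 0xFFFF
# (0x9A is a typical access byte for a readable code segment):
raw = struct.pack("<HHBBH", 0xFFFF, 0x2345, 0x01, 0x9A, 0)
print(decode_descriptor(raw))
```

The DPL field is where the privilege model from earlier meets memory management: the MMU compares it against the privilege of the code making the access before allowing the reference.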
The interrupt structure of protected mode is very different from real mode in that it has a table of its own, known as the interrupt descriptor table.
This exploration of cutting technology spans from prehistoric stone tools to modern computer-controlled machine tools, tracing how this fundamental concept has shaped human civilization and continues to evolve today.
The story begins in prehistoric times, with the first evidence of sharp tools dating back 2.6 million years. Early hominids used crude stone "choppers" to cut meat and work with wood, empowering them to create more advanced implements. The science of cutting involves separating materials through highly directed force, with the cutting tool needing to be harder than the material being cut.
The Bronze Age marked a revolution in cutting technology, as humans transitioned from stone to metal tools around 6000 BC. Copper's low melting point made it ideal for early metalworking, and the discovery of bronze alloys created harder, more durable cutting tools. This period also saw the rise of metallurgy, the study of metals' physical and chemical properties. Crystal lattice structure, dislocations, and grain boundaries are key concepts in understanding metal behavior. Techniques like alloying, heat treatment, and work-hardening improve metal properties for specific applications.
The Iron Age brought further advancements with improved furnace technology enabling iron smelting. Bloomeries produced workable iron by hot-forging below melting point, while blast furnaces increased production, creating cast iron for structural use. Puddling furnaces later allowed the production of wrought iron with lower carbon content.
The dawn of the Steel Age marked a turning point in cutting technology. Steel combined iron's strength with improved workability, and innovations like the Bessemer process and Open Hearth method made steel production more efficient and affordable. This led to the rise of industrial giants like US Steel, the world's first billion-dollar corporation.
Machine tools evolved from early developments like the bow lathe and water-powered boring mill to Maudslay's revolutionary screw-cutting lathe in 1800. Eli Whitney's milling machine in 1820 enabled mass production, and by 1875, the core set of modern machine tools was established. The mid-20th century saw the introduction of numerical control (NC) for automation, followed by computer numerical control (CNC) machines in the 1970s.
Advancements in cutting tool materials played a crucial role in this evolution. High-speed steel, introduced in 1910, addressed the limitations of carbon steel by maintaining hardness at higher temperatures. Carbide tools, developed from Henri Moissan's 1893 tungsten carbide discovery, combined extreme hardness with improved toughness. The manufacturing process of cemented carbides impacted tooling design, including the development of replaceable cutting inserts. Exotic materials like ceramics and diamonds found use in specific high-speed applications and abrasive machining.
Looking to the future, emerging non-mechanical methods like laser cutting and electrical discharge machining challenge traditional techniques. Additive manufacturing (3D printing) poses a further challenge to traditional subtractive processes. Despite these new technologies, mechanical cutting remains dominant due to its versatility and efficiency, with increasing automation and integration keeping it relevant in modern manufacturing.
From the first stone tools to today's computer-controlled machines, cutting has shaped the world in countless ways. As humanity looks to the future, the principles of cutting continue to evolve, adapting to new materials and manufacturing challenges. This journey through cutting technology offers insights into a fundamental process that has driven human progress for millennia, appealing to those interested in history, engineering, and the intricacies of how things are made.
The tech industry's obsession with AI is hitting a major limitation: power consumption. Training and using AI models is proving to be extremely energy intensive. A single GPT-4 request consumes as much energy as charging 60 iPhones, 1000x more than a traditional Google search. By 2027, global AI processing could consume as much energy as the entire country of Sweden. In contrast, the human brain is far more efficient, with 17 hours of intense thought using the same energy as one GPT-4 request. This has spurred a race to develop AI that more closely mimics biological neural systems.
The high power usage stems from how artificial neural networks (ANNs) are structured with input, hidden, and output layers of interconnected nodes. Information flows forward through the network, which is trained using backpropagation to adjust weights and biases to minimize output errors. ANNs require massive computation, with the GPT-3 language model having 175 billion parameters. Training GPT-3 consumed 220 MWh of energy.
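To make the weight-and-bias adjustment concrete, here is a deliberately tiny example: a single sigmoid neuron learning the OR function by gradient descent on the squared error. With only one layer, backpropagation reduces to this delta rule; everything here is illustrative and nothing like GPT scale, where the same arithmetic is repeated across billions of parameters.

```python
import math, random

random.seed(0)

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

# One sigmoid neuron learning OR: two weights and a bias, adjusted by
# gradient descent to minimize squared output error.
w, b, lr = [random.random(), random.random()], 0.0, 1.0
data = [([0, 0], 0), ([0, 1], 1), ([1, 0], 1), ([1, 1], 1)]

for _ in range(2000):
    for x, target in data:
        out = sigmoid(w[0] * x[0] + w[1] * x[1] + b)
        grad = (out - target) * out * (1 - out)   # dError/d(pre-activation)
        w = [wi - lr * grad * xi for wi, xi in zip(w, x)]
        b -= lr * grad

print([round(sigmoid(w[0] * x[0] + w[1] * x[1] + b)) for x, _ in data])
# -> [0, 1, 1, 1]
```

Note that every update touches every weight, whether or not the input changed, which is exactly the "constantly recalculating" cost the next section contrasts with spiking networks.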
To improve efficiency, research is shifting to spiking neural networks (SNNs) that communicate through discrete spikes like biological neurons. SNNs only generate spikes when needed, greatly reducing energy use compared to ANNs constantly recalculating. SNN neurons have membrane potentials that trigger spikes when a threshold is exceeded, with refractory periods between spikes. This allows SNNs to produce dynamic, event-driven outputs. However, SNNs are difficult to train with standard ANN methods.
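The membrane potential, threshold, and refractory behavior described above can be sketched as a minimal leaky integrate-and-fire neuron. The parameter values (threshold, leak factor, refractory length) are arbitrary illustrations, not any published model's constants.

```python
def lif_neuron(input_current, threshold=1.0, leak=0.9, refractory=2):
    """A minimal leaky integrate-and-fire neuron: the membrane potential
    accumulates input and leaks each step; crossing the threshold emits
    a spike, resets the potential, and starts a refractory period."""
    v, spikes, cooldown = 0.0, [], 0
    for i in input_current:
        if cooldown > 0:        # refractory: input is ignored
            cooldown -= 1
            spikes.append(0)
            continue
        v = v * leak + i        # leaky integration of input current
        if v >= threshold:
            spikes.append(1)    # discrete spike event
            v = 0.0             # reset membrane potential
            cooldown = refractory
        else:
            spikes.append(0)
    return spikes

# Under constant drive the neuron fires sparse, periodic spikes rather
# than producing output on every step:
print(lif_neuron([0.5] * 10))  # -> [0, 0, 1, 0, 0, 0, 0, 1, 0, 0]
```

The energy argument is visible in the output: computation happens only at the two spike events, while an ANN node would produce a value at all ten steps.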
SNNs perform poorly on traditional computer architectures. Instead, neuromorphic computing devices are being developed that recreate biological neuron properties in hardware. These use analog processing in components like memristors and spintronic devices to achieve neuron-like behavior with low power. Early neuromorphic chips from IBM and Intel have supported millions of simulated neurons with 50-100x better energy efficiency than GPUs. As of 2024, no commercially available analog AI chips exist, but a hybrid analog-digital future for ultra-efficient AI hardware seems imminent. This could enable revolutionary advances in fields like robotics and autonomous systems in the coming years.
The spark plug, a crucial component in gasoline internal combustion engines, has a rich history dating back to 1859 when Belgian engineer Jean Joseph Étienne Lenoir first used it in his coal gas and air engine. The design was refined by inventors like Nikola Tesla, Frederick Richard Simms, and Robert Bosch, with Bosch being the first to develop a commercially viable spark plug.
Spark plugs ignite the air-fuel mixture in the engine's combustion chamber by creating a spark between two electrodes separated by an insulator. The spark ionizes the gases in the gap, causing a rapid surge of electron flow that ignites the mixture, creating a controlled combustion event.
Early spark plugs used mineral insulators and had short lifespans. The introduction of sintered alumina in the 1930s improved insulation, strength, and thermal properties, allowing higher voltages and better self-cleaning capabilities. In the 1970s, lead-free gasoline and stricter emissions regulations prompted further redesigns, including the use of copper core electrodes to improve self-cleaning and prevent pre-ignition.
Multiple ground electrode plugs and surface-discharging spark plugs were explored in the following decades. The 1990s saw the introduction of coil-on-plug ignition systems and noble metal high-temperature electrodes, enabling higher voltages, stronger sparks, and longer service life.
Modern spark plugs also incorporate ionic-sensing technology, which allows the engine control unit to detect detonation, misfires, and optimize fuel trim and ignition timing for each cylinder. This level of control has pushed engine designs to be more efficient and powerful.
As electric vehicles become more prevalent, the spark plug's evolution may soon reach its end, with electricity both pioneering the emergence and likely ushering in the end of the internal combustion engine.
The evolution of automotive drivelines began centuries ago with horse-drawn implements, such as the Watkins and Bryson mowing machine, which introduced the first modern conceptualization of a driveshaft in 1861. Early automobiles primarily used chain drives, but by the turn of the century, gear-driven systems became more prevalent. The 1901 Autocar, designed by Louis S. Clarke, was considered the first shaft-driven automobile in the U.S., featuring a rear-end layout with a sliding-gear transmission, torque tube, and bevel gear assembly with an integrated differential. Autocar used a "pot type" universal joint, which was later superseded by the more robust Cardan universal joint, first used in the 1902 Spyker 60 HP race car.
Cardan universal joints, named after the Italian mathematician Gerolamo Cardano, consisted of two yokes connected by a cross-shaped intermediate journal, allowing power transmission between shafts at an angle. These joints used bronze bushings and later needle roller bearings to reduce friction and increase durability. Slip yokes were incorporated into the driveline assembly to accommodate axial movement. However, Cardan joints had limitations, such as non-uniform rotational speeds and increased friction at higher angles.
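The non-uniform rotation mentioned above follows a standard kinematic relation for a single Cardan joint, sketched below. The zero point of the input angle depends on the yoke phase convention, so only the size of the fluctuation, not its phase, should be read from this.

```python
import math

def cardan_speed_ratio(theta_deg, beta_deg):
    """Output/input angular-speed ratio of a single Cardan joint at
    input angle theta, with the shafts meeting at joint angle beta:
    ratio = cos(beta) / (1 - sin^2(beta) * cos^2(theta))."""
    t, b = math.radians(theta_deg), math.radians(beta_deg)
    return math.cos(b) / (1 - math.sin(b) ** 2 * math.cos(t) ** 2)

# At a 30-degree joint angle the output speed oscillates roughly
# +/-15% around the input speed, twice per revolution:
print([round(cardan_speed_ratio(t, 30), 3) for t in range(0, 360, 90)])
# -> [1.155, 0.866, 1.155, 0.866]
```

This is exactly why double Cardan arrangements became popular: phasing a second joint 90 degrees out cancels the fluctuation the first one introduces.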
Throughout the 1920s, several design variations were developed to address these limitations. Ball and trunnion universal joints, like those used in the 1928 Chrysler DeSoto, allowed for greater angle misalignment and integrated slip characteristics. Double Cardan shafts, which used two universal joints connected by an intermediate propeller shaft, became a popular choice for rear-wheel drive vehicles due to their design flexibility, manufacturability, and torque capacity.
Constant velocity (CV) joints were introduced in the late 1920s to address the limitations of Cardan joints in front-wheel drive vehicles. The Tracta joint, invented by Jean-Albert Grégoire, was one of the first CV joints used in production vehicles. However, the most practical and popular design was the Rzeppa joint, invented by Ford engineer Alfred H. Rzeppa in 1926. Rzeppa joints used ball bearings to provide smooth power transfer at high angles. Tripod joints, developed in the 1960s, were commonly used on the inboard side of front-wheel drive half-shafts due to their affordability and ability to accommodate axial movement.
During the 1960s, manufacturers began experimenting with CV joints on propeller shafts for rear-wheel drive cars to achieve smoother power transfer. Double Cardan joints, which placed two Cardan joints back-to-back in a single unit, were also developed for use in high-articulation, high-torque applications.
Until the 1980s, drive shafts were primarily made from steel alloys. In 1985, the first composite drive shafts were introduced by Spicer U-Joint Division of Dana Corporation and GM. Composite drive shafts, made from carbon fiber or glass fiber in a polymer matrix, offered significant weight savings, high strength-to-weight ratios, and inherent damping properties.
As the automotive industry looks towards a future with alternative power sources, driveline components and universal joints remain crucial elements. Despite attempts to eliminate drivelines using hub electric motors, the traditional drivetrain layout is likely to remain dominant in the near future.
The fascinating evolution of automotive electrical systems traces back to the first mass-produced electrical system in the Ford Model T. Over its 19-year production, the Model T's electrical setup evolved from a simple magneto-powered ignition to incorporating elements found in modern vehicles. The narrative unfolds the transition from cloth-covered wires to advanced multipin and modular connectors, highlighting the technological leaps in automotive wiring.
In the early days, vehicles like the Ford Model T relied on cloth-covered, stranded copper wires, offering flexibility but limited durability. Early wiring faced challenges like moisture absorption and vulnerability to abrasion, leading to unreliable electrical systems. The introduction of rubber-covered wires presented a solution, albeit with its own set of drawbacks, such as brittleness over time.
The 1930s marked a significant shift with the introduction of bullet and spade terminals, eliminating the need for fasteners and allowing for more secure connections in tight spaces. This period also saw the advent of crimping, a method that enhanced connection reliability by avoiding soldering defects and improving resistance to vibration.
As vehicles became more complex, the need for efficient and reliable connectors grew. The aviation industry's adoption of circular connectors in the 1930s paved the way for similar advancements in automotive wiring. These connectors, characterized by their ruggedness and ease of use, set the stage for the standardization of components, ensuring reliability across various applications.
The introduction of synthetic polymers like PVC in the 1920s and 1930s revolutionized wire insulation, offering superior resistance to environmental factors. However, the evolving demands of automotive systems called for even more durable materials, leading to the adoption of advanced insulation materials in high-stress applications.
The 1950s saw vehicles integrating more amenities, necessitating the development of less costly, plastic-based multipin connectors. This period also marked the beginning of the transition towards electronic management systems in vehicles, significantly increasing wiring complexity.
By the 1980s, the need to transmit digital and analog signals efficiently led to the adoption of materials with low dielectric constants, minimizing signal loss. The era also welcomed the Controller Area Network (CAN) bus protocol, a robust communication system that allowed multiple electronic devices to communicate over a single channel.
The 1990s and beyond have seen vehicles adopting mixed network systems to cater to varied subsystem requirements, from critical controls to infotainment. The advent of advanced driver assistance systems (ADAS) and the shift towards electric vehicles (EVs) have introduced new challenges and standards in automotive wiring, emphasizing safety and efficiency in high-voltage environments.
In this captivating journey through history, we explore the evolution of cable management and the birth of cable ties, a seemingly simple yet revolutionary invention. The narrative begins in the late 19th century when electrical advancements were transforming New York City. Enter Robert M. Thomas and Hobart D. Betts, Princeton University students turned entrepreneurs, who paved the way for the future of electrical infrastructure.
Fast forward to the 1950s, where Maurus C. Logan, a Scottish immigrant working with Thomas and Betts, witnessed the intricate process of cable lacing in Boeing aircraft manufacturing. Cable lacing, a century-old technique, involved using waxed linen cords to neatly secure cable bundles, primarily in telecommunications. Logan, determined to simplify this labor-intensive process, spent two years developing what would become the modern cable tie.
Logan's breakthrough came in 1958 with a patent submission for a nylon strap with an integrated oval aperture, designed to loop around cables and secure itself through friction. Despite initial indecisiveness on the latching mechanism, Logan's design marked the birth of the cable tie. Thomas and Betts further refined the design, leading to the iconic Ty-Rap cable tie, patented in 1962, with lateral locking grooves and an embedded steel locking barb for enhanced security.
The cable tie's success led to legal disputes, as its design closely resembled a British patent by Kurt Wrobel. Nevertheless, Thomas and Betts prevailed in the market, solidifying their claim as the inventors of the cable tie.
The Ty-Rap cable tie evolved into specialized versions, including heat-resistant and space-grade variants. Offshoot products like Ty-Met, made of stainless steel, and Ty-Fast, a nylon tie with an integrated ratchet barb, gained popularity globally, earning the colloquial name "zip ties" or "tie wraps."
Today, over 45 companies globally produce cable ties, with an estimated annual production of 100 billion units. Thomas and Betts, now ABB Installation Products, continue to be a key player in the cable tie market, with ongoing developments for niche applications.
Maurus Logan, the visionary behind the cable tie, dedicated his career to innovation, filing six patent applications and rising to the role of Vice President of Research and Development. His legacy lives on as cable ties have become an integral part of our modern world, found everywhere from the ocean floor to the surface of Mars, silently playing a crucial role in powering our information-driven world and beyond.
In the early days of aviation, in both the civil and military worlds, a practical method for traversing large distances was highly sought after. While airframe and engine designs were constantly evolving, air-to-air refueling was seen as the only immediate solution to the range extension problem, particularly for military applications.
The first attempts at air-to-air refueling were carried out as dangerous stunts performed by civilian pilots known as barnstormers at flying circuses. The first true systematic attempt at inflight refueling was conducted on October 3, 1920, in Washington, D.C., by Cabot of the United States Naval Reserve. Finally, in 1923, WWI veteran pilots Captain Lowell Smith and Lieutenant John Richter would devise a method to deal with the flight-duration limits that plagued them during combat. Over the following months, numerous test flights were flown over a circular course, with the team achieving their first flight endurance record on June 27th, at 6 hours and 39 minutes of flight time.
Using the refueling technique developed by Smith and Richter, the tankers carried a 50-foot hose that would be lowered to the receiver aircraft, which itself was modified with a large fuel funnel that led to its fuselage tank. Throughout the entire flight, forty-two contacts were made with the tankers, with almost 5,000 gallons of gasoline and 245 gallons of oil being transferred.
By 1935, British aviation pioneer Sir Alan Cobham would demonstrate a technique known as grappled-line looped-hose air-to-air refueling. In this procedure, the receiver aircraft would trail a steel cable, which was then grappled by a line shot from the tanker. The line was drawn into the tanker, where the receiver's cable was connected to the refueling hose. Once the hose was connected, the tanker climbed slightly above the receiving aircraft so that fuel would flow under gravity. By the late 1930s, Cobham's company, Flight Refuelling Ltd (FRL), would become the very first producer of a commercially viable aerial refueling system.
In March of 1948, the USAF's Air Materiel Command initiated the GEM program in the hopes of developing long-range strategic capabilities through the study of aircraft winterization, air-to-air refueling, and advanced electronics. The air-to-air refueling program in particular was given top priority within GEM. After a year of training and testing with the modified FRL air-to-air refueling system, it would be used by the B-50 Superfortress "Lucky Lady II" of the 43rd Bomb Wing to conduct the first non-stop around-the-world flight.
The solution to the problem came in the form of a flying boom refueling concept. The flying boom aerial refueling system is based on a telescoping rigid fueling pipe that is attached to the rear of a tanker aircraft. The entire mechanism is mounted on a gimbal, allowing it to move with the receiver aircraft. In a typical flying boom aerial refueling scenario, the receiver aircraft rendezvouses with the tanker and maintains formation.
The receiver aircraft then moves to an in-range position behind the tanker, under signal light or radio guidance from the boom operator. Once in position, the operator extends the boom to make contact with the receiver aircraft where fuel is then pumped through the boom.
Simultaneously, Boeing would develop the world's first production aerial tanker, the KC-97 Stratofreighter. Over the next few years, Boeing would go on to develop the first high-altitude, high-speed, jet-engine-powered flying-boom aerial tanker, the KC-135 Stratotanker.
By 1949, Cobham had devised the first probe-and-drogue aerial refueling system. Probe-and-drogue refueling employs a flexible hose that trails behind the tanker aircraft. During aerial refueling, the drogue stabilizes the hose in flight and provides a funnel to guide the insertion of a matching refueling probe that extends from the receiver aircraft.
When refueling operations are complete, the hose is then reeled completely into an assembly known as the Hose Drum Unit. Operational testing of the first probe-and-drogue refueling system began in 1950.
On June 4th, 2021, the US Navy conducted its first-ever aerial refueling between a manned aircraft and an unmanned tanker, using a Boeing MQ-25 Stingray and a Navy F-18 Super Hornet. Conducted over Mascoutah, Illinois, the four-and-a-half-hour test flight included a series of both wet and dry contacts with the UAV, totaling around 10 minutes of contact time and transferring around 50 gallons of fuel.
Discover the incredible journey of gyroscopes in transforming navigation and the aerospace industry. From historic sea voyages to the cutting-edge technology in modern aviation and space exploration, this video unveils the fascinating evolution of gyroscopes.

Dive into the origins with HMS Victory's tragic loss and John Serson's pioneering work, to the groundbreaking inventions of Bohnenberger, Johnson, and Foucault. Explore the fundamental principles of gyroscopes, their role in the development of gyrocompasses by Anschütz-Kaempfe, and their critical application in early 20th-century aviation and warfare technologies.

Learn about the vital transition during World War II to sophisticated inertial navigation systems (INS) and their pivotal role in rocketry, especially in the German V2 and American Atlas rockets. Understand the mechanics of INS, the challenge of drift, and the advancements in computing that led to its refinement. Discover how the aviation industry embraced INS, from the B-52's N-6 system to the Delco Carousel in commercial aviation.

Witness the emergence of new gyroscopic technologies like ring laser and fiber-optic gyroscopes, and their integration with GPS for unprecedented navigational accuracy. Explore the latest advancements in Micro-Electro-Mechanical Systems (MEMS) and their widespread application in consumer electronics. Finally, envision the future of gyroscopes in enhancing virtual reality, autonomous vehicles, and motion-based user interfaces. This comprehensive overview not only traces the history but also forecasts the exciting future of gyroscopes in our increasingly digital and interconnected world.
In this comprehensive exploration of randomness, we delve into its perplexing nature, historical journey, statistical interpretations, and pivotal role in various domains, particularly cryptography. Randomness, an enigmatic concept defying intuition, manifests through seemingly unpredictable sequences like coin flips or digits of pi, yet its true nature is only indirectly inferred through statistical tests.
The historical narrative reveals humanity's earliest encounters with randomness in gaming across ancient civilizations, progressing through Greek philosophy, Roman personification, Christian teachings, and mathematical analysis by Italian scholars and luminaries like Galileo, Pascal, and Fermat. Entropy, introduced in the 19th century, unveiled the limits of predictability, especially in complex systems like celestial mechanics.
Statistical randomness, derived from probability theory, relies on uniform distribution and independence of events in a sample space. However, its limitation lies in perceivable unpredictability, as exemplified by the digits of pi or coin flips, which exhibit statistical randomness yet remain reproducible given precise initial conditions.
Information theory, notably Claude Shannon's work, established entropy as a measure of uncertainty and information content, showcasing randomness as the opposite of predictability in a system. Algorithmic randomness, introduced by von Mises and refined by Kolmogorov, measures randomness through compressibility but faces challenges because such compressibility is not computable. Martin-Löf's work extends this notion by defining randomness based on null sets.
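Shannon's entropy measure can be made concrete with a short sketch that computes the entropy of a symbol sequence in bits per symbol (a minimal illustration, not tied to any system described above):

```python
from collections import Counter
from math import log2

def shannon_entropy(data):
    """Shannon entropy of a sequence, in bits per symbol."""
    counts = Counter(data)
    n = len(data)
    h = -sum((c / n) * log2(c / n) for c in counts.values())
    return h + 0.0  # normalizes -0.0 to 0.0

# A constant stream carries no information; four equally likely
# symbols carry two bits each.
print(shannon_entropy("aaaa"))  # 0.0
print(shannon_entropy("abcd"))  # 2.0
```

A uniformly random byte stream approaches the maximum of 8 bits per symbol, which is why entropy estimates like this are used as one of the statistical tests for randomness.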
The integration of randomness into computer science led to the emergence of randomized algorithms, divided into Las Vegas and Monte Carlo categories, offering computational advantages. Encryption, crucial in modern communications, relies on randomness for secure key generation, facing challenges due to vulnerabilities in pseudorandom algorithms and hardware random number generators.
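The two categories of randomized algorithm can be illustrated with minimal sketches: a Monte Carlo estimator of π (always fast, only approximately correct) and a Las Vegas search (always correct, with a random running time). Both examples are illustrative, not drawn from the source:

```python
import random

def monte_carlo_pi(samples=100_000, seed=0):
    """Monte Carlo: fixed running time, approximately correct answer."""
    rng = random.Random(seed)
    hits = sum(rng.random() ** 2 + rng.random() ** 2 <= 1.0
               for _ in range(samples))
    return 4 * hits / samples

def las_vegas_find(items, target, seed=0):
    """Las Vegas: always-correct answer, random running time."""
    rng = random.Random(seed)
    tries = 0
    while True:
        tries += 1
        i = rng.randrange(len(items))
        if items[i] == target:
            return i, tries  # returned index is guaranteed correct

print(round(monte_carlo_pi(), 2))  # close to 3.14
idx, tries = las_vegas_find([3, 1, 4, 1, 5], 5)
print(idx)  # 4
```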
The evolution of cryptography, from DES to AES and asymmetric-key algorithms like RSA, emphasizes the critical role of randomness in securing digital communications. While hardware random number generators harness inherent physical unpredictability, they face challenges regarding auditability and potential vulnerabilities.
The future of randomness lies in embedded quantum random number generators, promising heightened security, while encryption algorithms adapt to counter emerging threats posed by quantum computing's properties.
This in-depth exploration captures the historical, theoretical, and practical dimensions of randomness, highlighting its significance in diverse fields and its pivotal role in securing modern communications.
Explore the fascinating world of foam in this in-depth exploration of its history and properties. From its natural occurrences in sea foam and whipped egg whites to its critical role in modern manufacturing, foam has evolved over centuries. Learn about its structure, stability, and the essential role of surfactants in foam formation. Discover the historical journey of foam, from natural cellular solids like cork to the development of manufactured foams in the late 1800s. Dive into the creation of foam latex and the rise of polymeric foams, including the iconic Styrofoam and versatile polyurethane foams. Understand the environmental concerns surrounding foam products and the ongoing efforts to make them more sustainable. Explore exotic foam compositions like syntactic foams and metal foams, showcasing foam's diverse applications in extreme environments. Join us on this educational journey into the complex and intriguing world of foam.
In October 2006, a team of British and U.S. scientists demonstrated a breakthrough physical phenomenon, until then known only to science fiction: the world's first working "invisibility cloak". The team, led by Professor Sir John Pendry, created a small device about 12 cm across that had the intrinsic property of redirecting microwave radiation around it, rendering it almost invisible to microwaves.
What made this demonstration particularly remarkable was that this characteristic of microwave invisibility was not derived from the chemical composition of the object but rather the structure of its constituent materials. The team had demonstrated the cloaking properties of a metamaterial.
WHAT ARE THEY A metamaterial is a material purposely engineered to possess one or more properties that are not possible with traditional, naturally occurring materials. Radiation can be bent, amplified, absorbed or blocked in a manner that far supersedes what is possible with conventional materials.
PROPERTIES OF REFRACTION The refractive index of a material varies with the radiation’s wavelength, which in turn also causes the angle of refraction to vary. Every known natural material possesses a positive refractive index for electromagnetic waves. Metamaterials, however, are capable of negative refraction.
HOW REFRACTION IS CONTROLLED Permittivity is a measure of how much a material polarizes in response to an applied electric field, while magnetic permeability is a measure of the magnetization a material obtains in response to an applied magnetic field. As an electromagnetic wave propagates through the metamaterial, each unit cell responds to the radiation, and the collective result of these interactions creates an emergent material response to the electromagnetic wave that supersedes what is possible with natural materials.
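To make the refraction behavior concrete, the sketch below applies Snell's law with the index relation n = ±√(εᵣμᵣ), where the negative root is taken when both relative permittivity and permeability are negative; the material values are illustrative:

```python
from math import asin, sin, radians, degrees, sqrt

def refractive_index(eps_r, mu_r):
    """n = ±sqrt(eps_r * mu_r); the negative root applies when both
    relative permittivity and permeability are negative."""
    n = sqrt(abs(eps_r) * abs(mu_r))
    return -n if (eps_r < 0 and mu_r < 0) else n

def refraction_angle(n1, n2, incidence_deg):
    """Snell's law: n1*sin(t1) = n2*sin(t2). A negative result means the
    refracted ray bends to the SAME side of the normal as the incident ray."""
    return degrees(asin(n1 * sin(radians(incidence_deg)) / n2))

print(refraction_angle(1.0, 1.5, 30))                       # ≈ 19.47° (ordinary glass)
print(refraction_angle(1.0, refractive_index(-1, -1), 30))  # ≈ -30° (metamaterial)
```

The negative angle is the signature of negative refraction: the wave emerges on the "wrong" side of the normal, which is what makes superlensing and cloaking geometries possible.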
FIRST CONCEPTS The properties of metamaterials were first described in 1904, with the conceptualization of negative wave propagation by British mathematician Horace Lamb and British physicist Arthur Schuster. The idea was put on a rigorous theoretical footing in the 1960s by Soviet physicist Victor Veselago. Veselago’s research included producing methods for predicting the phenomenon of refraction reversal, for which he coined the term left-handed materials.
ARTIFICIAL DIELECTRICS The development of artificial dielectrics during the 1950s and 1960s began to open up new ways to shape microwave radiation, especially for radar antenna design. Artificial dielectrics are composite materials made from arranged arrays of conductive shapes or particles supported in a nonconductive matrix. Similar to metamaterials, artificial dielectrics are designed to have a specific electromagnetic response, behaving as an engineered dielectric material.
FIRST METAMATERIALS Pendry’s expertise in solid state physics led him to be contracted by Marconi Materials Technology to explain the physics of how their naval stealth material actually worked. Pendry discovered that the microwave absorption of the material did not come from the chemical structure of the carbon it was made from but rather the long, thin shape of its fibers. He had figured out how to manipulate a material’s electric and magnetic response, effectively allowing for a method to engineer how electromagnetic radiation moves through a material.
SUPERLENS By late 2000, Pendry had proposed the idea of using metamaterials to construct a superlens. Pendry theorized that one could be developed employing the negative refractive index behavior of a metamaterial. However, in practice, this proved to be an incredibly difficult task due to the resonant nature of metamaterials. By 2003, Pendry's theory was first experimentally demonstrated at microwave frequencies, by exploiting the negative permittivity of metals to microwaves.
CLOAKING Composed of 21 alternating sheets of silver and a glasslike substance, the material, referred to as a fishnet, causes light to bend in unusual ways as it moves through the alternating layers. What made it particularly notable was that it operated on a wider band of radiation than previous attempts.
FUTURE OF CLOAKING Despite ongoing research and relative success with microwave radiation, to date optical cloaking remains elusive due to the technical challenges of manipulating light within a metamaterial. Light moving through materials typically gets absorbed until, at some point, the energy of the radiation falls off, making it a challenge to guide its propagation in a useful way.
Driving On Compressed Air: The Little-Known Compressed Air Revolution (New Mind, 2023-08-26)
In March 2020, Reza Alizade Evrin and Ibrahim Dincer from the University of Ontario Institute of Technology's Clean Energy Research Lab pioneered an innovative vehicle prototype fueled by compressed air, using readily available components. This prototype showcased remarkable energy efficiency, reaching up to 90% of a lithium-ion electric vehicle's efficiency, with a predicted range of around 140 kilometers. While current electric vehicles surpass this range, the real breakthrough was the exclusive use of compressed air as an energy source.
The history of compressed air vehicles dates back to the early 19th century when the concept of harnessing compressed air's power for vehicles emerged. Despite early breakthroughs like Louis Mékarski's compressed air locomotive in the 1860s, practical applications were limited. Mining operations and tunnel constructions adopted compressed air vehicles due to their safety advantages, but they couldn't compete with internal combustion engines.
Compressed air storage systems faced inherent flaws, with conventional methods wasting energy due to heat loss during compression and cooling during expansion. Adiabatic and isothermal storage techniques were explored to improve efficiency, particularly for utility power storage. Researchers like Evrin and Dincer delved into near-isothermal compressed air storage, enhancing thermodynamic limits for vehicle applications using phase change materials.
Advantages of compressed air vehicles include potential fourfold energy storage compared to lithium-ion batteries, direct mechanical energy conversion, quiet and lightweight turbine-based motors, and sustainability due to minimal toxic materials and reduced manufacturing complexity. Tankage solutions vary between low-pressure and high-pressure systems, utilizing lightweight composite tanks that are safer and cheaper to produce compared to batteries.
The challenge of designing efficient air motors led to innovations like EngineAir's Di Pietro Motor, addressing torque inconsistencies through a rotary positive displacement design. However, achieving consistent torque across pressure ranges remained an obstacle.
Commercialization history saw ups and downs. French engineer Guy Negre proposed the idea in 1996, leading to prototypes like MDI's "OneCAT" and partnerships with companies like Tata Motors. However, challenges including safety concerns and governmental support for electric and hybrid vehicles hindered mass adoption. MDI's AirPod 2.0, introduced in 2019, featured hybrid refueling and improved speeds, yet production plans remained uncertain.
Despite the journey's challenges, MDI persists in the pursuit of compressed air vehicle commercialization, aiming to revolutionize transportation with this sustainable technology.
The Czinger 21C hypercar concept incorporates a revolutionary brake node, a combination of braking system and suspension upright, using Divergent 3D's DAPS system. DAPS utilizes Metal Additive Manufacturing and generative design powered by AI to create highly optimized structures. Generative design explores numerous solutions based on defined parameters, producing innovative designs. It can optimize parts while considering various constraints and objectives.
Generative design methods include Cellular Automata, Genetic Algorithms, Shape Grammar, L-Systems, and Agent-Based Models. Cellular Automata use mathematical models with discrete cells and predefined rules to create emergent patterns. Genetic Algorithms simulate natural selection to evolve solutions in iterative generations. Shape Grammar employs a vocabulary of basic shapes and rules to create diverse designs. L-Systems model growth and complex structures using symbols and iterative rules. Agent-Based Models simulate interactions of autonomous agents, producing emergent patterns and system-level dynamics.
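As a minimal illustration of the cellular automata approach described above, the sketch below runs a one-dimensional elementary automaton (Wolfram's rule 90, chosen purely for illustration), where a single predefined rule applied to discrete cells produces an emergent fractal pattern:

```python
def step(cells, rule=90):
    """One generation of an elementary cellular automaton. Each cell's
    next state is looked up from the rule number's bits, indexed by the
    (left, self, right) neighborhood, with wraparound at the edges."""
    n = len(cells)
    return [
        (rule >> (cells[(i - 1) % n] * 4 + cells[i] * 2 + cells[(i + 1) % n])) & 1
        for i in range(n)
    ]

# Seed a single live cell and watch a Sierpinski-like triangle emerge.
row = [0] * 15
row[7] = 1
for _ in range(8):
    print("".join("#" if c else "." for c in row))
    row = step(row)
```

Rule 90 reduces to "each cell becomes the XOR of its two neighbors"; swapping in other rule numbers produces entirely different emergent behavior, which is exactly the design-space exploration generative methods exploit.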
These generative design methods find application in various industries, including architecture, automotive, and aesthetics. They help optimize components, such as connecting rods, lattice patterns, taillights, and suspension systems, improving performance while reducing weight. However, the use of generative design is still developing, with advancements in AI and computational models continually expanding its capabilities.
In the future, AI-driven generative design could revolutionize engineering and design processes, surpassing human capabilities and rapidly producing highly efficient and complex designs. It has the potential to redefine the roles of engineers and designers, leading to more innovative and optimized products in various fields.
The inspiration for the Wankel rotary engine derives from the geometric principle that when a circle rolls along the circumference of another circle of double its radius, a point attached to the rolling circle traces a curve known as an epitrochoid. This curve forms the shape of the inner walls of the rotor housing. The rotor housing hosts all stages of the rotary engine’s combustion cycle, much like a cylinder in a conventional engine.
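The housing curve can be generated numerically. The parametric form below is the one commonly used for the two-lobed Wankel bore; the generating radius R and eccentricity e are illustrative values, not taken from any production engine:

```python
from math import cos, sin, tau

def wankel_housing(R=105.0, e=15.0, points=360):
    """Points on the epitrochoid bore of a Wankel housing, using the
    common parametric form x = e*cos(3a) + R*cos(a),
                           y = e*sin(3a) + R*sin(a),
    where R is the generating radius and e the eccentricity (both in
    millimeters here, chosen for illustration only)."""
    return [
        (e * cos(3 * a) + R * cos(a), e * sin(3 * a) + R * sin(a))
        for a in (i * tau / points for i in range(points))
    ]

pts = wankel_housing()
# At a = 0 the curve passes through (R + e, 0), the widest point of the bore.
print(pts[0])  # (120.0, 0.0)
```

Plotting these points reveals the characteristic peanut-shaped, two-lobed bore; the ratio of e to R controls how pinched the waist of the housing is.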
In order to keep compression in the chamber of a Wankel engine, the three tips of the rotor must form gas-tight seals against the inner walls of the rotor housing. This is accomplished by seals at the three apexes of the triangle, known as apex seals. These seals are usually made of metal and are pushed against the housing walls by springs. Since the seals are in contact with the housing’s inner case, they’re covered in engine oil to reduce friction. Because of the exposure of engine oil to the combustion process, a rotary engine burns oil by design. The amount of oil used is metered by a throttle-controlled metering pump.
The three apexes of the triangular-shaped rotor move uniformly along the inside walls of the rotor housing, dividing the cavity between the rotor and the interior walls of the housing into three continually changing regions of volume. Because of this unique configuration, rotary engines are classified as variable-volume progressing-cavity systems. Each rotor has three faces, creating three working chambers per housing. In effect, each face of the rotor "sweeps" its own volume as the rotor moves in an eccentric orbit within the housing.
Each side of the rotor is brought closer to and then further away from the wall of the internal housing, compressing and expanding the combustion chamber. A rotor is effectively akin to a piston.
Starting in the early 1960s, Mazda released a slew of unique, Wankel rotary powered models such as the Cosmo, RX-3, and three generations of the Mazda RX-7. The iconic history of Mazda and the evolution of the Wankel rotary engine began with a joint study contract between Mazda and the German car firm NSU, whose first rotary production car came equipped with a water-cooled single-rotor engine and standard front disc brakes, differentiating it from other similar cars of the period. Early cars often required an engine rebuild after only 50,000 kilometers or 31,000 miles. Many of these failures were attributed to poorly designed apex seal tips, a common weak point later realized in rotary engines.
Because of the direct contact of the apex seals, the biggest obstacle engineers faced in initial designs was the chatter marks they left on the rotor housing’s sliding surfaces. Mazda’s answer was apex seals made from a carbon composite. To an extent, these carbon seals were self-lubricating, addressing the issues facing the rotor housing wall surface.
They were also used in conjunction with an aluminum rotor housing whose walls were chrome-plated for durability. What made this possible was a new porous chrome plating on the interior walls of the rotor housing. The surface finish of this plating improved the effectiveness of the lubrication between the apex seal and the rotor housing.
From 1975 to 1980, it was discovered that the then-current apex seal design was subjected to high thermal and centrifugal loads during high RPM operation and under periods of high engine load. To rectify this issue, Mazda implemented a slight crowning of the seal’s tip profile. This additional crowning compensated for the rotor housing’s slight deformation under high loads, ensuring sufficient contact with the rotor housing walls. Mazda also improved the corner pieces by incorporating a spring design to keep the clearance of the rotor groove at a minimum.
By the early 1980s, further refinements by Mazda led to the adoption of a top-cut design that extended the main seal. The purpose was to reduce gas leakage at one end of the apex seal, where it would segment into two pieces. From 1985 to 2002, the apex seal was further reduced in size to 2 mm. Additionally, Mazda filled the center cavity of the spring corners with a heat-resistant rubber epoxy, adding additional sealing properties.
This latest iteration of the apex seal design was used in Mazda’s iconic high output, low weight twin-turbocharged 13B-REW engine. Made famous by the third-generation RX-7, it was used until the engine was finally dropped from production and replaced with the Renesis engine, which used its own apex seal design. The apex seal in the Renesis engine was a two-piece design made from cast iron with a low carbon content.
Explore the fascinating world of unconventional computers that defied the norms of their time, revolutionizing diverse fields from engineering to economics, torpedo guidance, digital logic, and animation. From Lukyanov's ingenious Water Integrator solving complex equations using water flow to Moniac's hydraulic macroeconomics modeling, delve into the Torpedo Data Computer's role in WWII, the conceptual marvel of Domino Computers, and the pioneering analog magic of Scanimate in producing early motion graphics. Witness how these unconventional machines shaped industries, solving complex problems in ways that predated the modern era of computing.
Gasoline is a mixture of light hydrocarbons with relatively low boiling points that, in the early days of petroleum refining, had no significant commercial value and was even seen as dangerous due to its high volatility. Because of this, it was initially considered a waste product and was often discarded and simply burned off.
COMPOSITION OF GASOLINE Despite its public perception, gasoline is not a clearly defined compound but rather a homogeneous blend of light to medium molecular weight hydrocarbons. The hydrocarbon types that commonly combine to form gasoline and contribute to its properties as a fuel are paraffins, olefins, naphthenes, and aromatics. Depending on the blend, gasoline’s energy density can vary anywhere from 32 to 36 megajoules per liter.
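To put that energy density range in perspective, a quick back-of-the-envelope calculation converts it into total tank energy; the 50-liter tank size is an illustrative assumption:

```python
# Energy content of an illustrative 50 L tank across the 32-36 MJ/L
# range quoted above, converted to kilowatt-hours (1 kWh = 3.6 MJ).
MJ_PER_KWH = 3.6
TANK_LITERS = 50

for mj_per_liter in (32, 36):
    tank_mj = mj_per_liter * TANK_LITERS
    print(f"{mj_per_liter} MJ/L -> {tank_mj} MJ ({tank_mj / MJ_PER_KWH:.0f} kWh)")
```

Even the low end of the range works out to well over 400 kWh of chemical energy, several times the capacity of typical electric vehicle battery packs, which is why gasoline's energy density set such a high bar for alternatives.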
EARLY GASOLINE Early gasoline produced directly from distillation was known as straight-run gasoline. When gasoline containing sulfur is burned, the resulting sulfur compounds are a major contributor to smog, acid rain, and ground-level ozone. These early gasoline blends would, by today’s standards, be unusable in modern higher compression engines, as even the most high-test blends had an octane rating below 70, with lesser quality blends going as low as 40.
CRACKING By 1910, the rising demand for automobiles, combined with the expansion of electrification, created a flip in the product demands of the petroleum industry, with the need for gasoline beginning to supersede that of kerosene. In response, refiners turned to thermal cracking. Named the Burton process, this technique thermally decomposes straight-run gasoline and heavier oils, cracking the heavier hydrocarbons and depleting their hydrogen to produce lighter, hydrogen-rich hydrocarbons. The instability of the fuel was also a concern, as the higher levels of unsaturated hydrocarbons produced by thermal cracking were reactive and prone to combining with impurities, resulting in gumming, further exacerbating the problem.
CATALYTIC CRACKING In the early 1920s, Almer McDuffie McAfee developed a new refining process that could potentially triple the gasoline yielded from crude oil by existing distillation methods. Known as catalytic cracking, the process heats heavy hydrocarbon feedstock to a high temperature along with a catalyst in a reactor. The catalyst initiates a series of chemical reactions that break the hydrocarbon molecules apart into smaller fragments that are then further cracked and recombined to produce lighter, more desired hydrocarbons for gasoline.
Catalytic cracked gasoline had a significantly higher olefin content, and more branched-chain and aromatic hydrocarbons than thermally cracked gasoline, which raised its octane rating. The catalyzing action also produced a fuel with lower sulfur and nitrogen content, which results in lower emissions when burned in engines.
FLUID-CRACKING In an attempt to circumvent Houdry patents, Standard Oil began researching an alternative method of catalytic cracking, resulting in the development and fielding of the fluid-based catalytic cracking process in the early 1940s. As the catalyst becomes deactivated by the buildup of carbon deposits caused by the cracking process, the spent catalyst is separated from the cracked hydrocarbon products and sent to a regeneration unit.
HYDRO CRACKING During this period, a new type of catalytic cracking process emerged, based on decades of research on hydrogenation, a reaction in which hydrogen is used to break down large hydrocarbon molecules into smaller ones while adding hydrogen atoms to the resulting molecules. Its efficiency at producing higher yields of gasoline from heavier oil products led to it being adopted on a commercial scale by refineries around the world during the 1960s.
POST LEAD After the phase-out of lead additives in gasoline, the petroleum industry switched to oxygenate additives, MTBE in particular. MTBE was itself later phased out over groundwater contamination concerns, leading to ethanol becoming the primary oxygenate and octane booster in gasoline by the early 2000s.
ALKYLATION Beyond additives, the process of alkylation also grew in use as a way to boost octane ratings. This technique is used to produce alkylate, a high-octane blending component for gasoline. Much like in other catalytic processes, the acid catalyst is separated and recycled, while the alkylate is separated and unreacted isobutane is recycled. The high-octane alkylate is then blended with other gasoline components.
ISOMERIZATION Another similar catalytic technique that began to grow in popularity is gasoline isomerization. This process typically focuses on the conversion of low-octane straight-chain paraffins found in light naphtha into branched-chain hydrocarbons that have a higher octane rating.
UAVs operate in the world of tactical intelligence, surveillance, and reconnaissance, or ISR, generally providing immediate support for military operations, often with constantly evolving mission objectives. Traditionally, airborne ISR imaging systems were designed around one of two objectives: either looking at a large area without the ability to provide detailed resolution of a particular object, or providing a high resolution view of specific targets with a greatly diminished capability to see the larger context. Up until the 1990s, wet film systems were used on both the U-2 and SR-71. Employing a roll of film 12.7 cm or 5 inches wide and almost 3.2 km or 2 miles long, this system would capture one frame every 6.8 seconds, with a limit of around 1,600 frame captures per roll.
BIRTH OF DIGITAL The first digital imaging system to be used for reconnaissance was the optical component of the Advanced Synthetic Aperture Radar System or ASARS. Installed on the U-2 reconnaissance aircraft in the late 1970s, ASARS used a large, phased-array antenna to create high-resolution radar images of the ground below. Complementing the radar was an imaging system that used a charge-coupled device or CCD camera to capture visible light images of the terrain being surveyed. This CCD camera operated in synchronization with the radar system and had a resolution of around 1 meter or 3.3 feet per pixel.
A CCD sensor consists of an array of tiny, light-sensitive cells. When combined with the limitations of computing hardware of the time, their designs were generally limited to less than a megapixel, with resolutions as low as 100,000 pixels being found in some systems.
CMOS By the early 1990s, a new class of imaging sensor called active-pixel sensors, primarily based on the CMOS fabrication process, began to permeate the commercial market. Active-pixel sensors employ several transistors at each photo site to both amplify and move the charge using a traditional signal path, making the sensor far more flexible for different applications due to this pixel independence. CMOS sensors also use more conventional, and less costly, manufacturing techniques already established for semiconductor fabrication production lines.
FIRST WAMI Wide Area Motion Imagery takes a completely different approach to traditional ISR technologies by making use of panoramic optics paired with an extremely dense imaging sensor. The first iteration of Constant Hawk’s optical sensor was created by combining six 11-megapixel CMOS image sensors that captured only visible and some infrared light intensity, with no color information.
At an altitude of 20,000 feet, the "Constant Hawk" was designed to survey a circular area on the ground with a radius of approximately 96 kilometers or 60 miles, covering a total area of over 28,500 square kilometers or about 11,000 square miles. Whenever an event on the ground triggered a change in the imagery of a region, the system would store a timeline of the imagery captured from that region. This made it possible to access any event that occurred within the system’s range during the mission’s flight duration. The real time investigation of a chain of events over a large area was now possible in an ISR mission.
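The quoted coverage figures follow directly from the circle-area formula; a quick sanity check:

```python
from math import pi

# A circle of radius 96 km (60 mi) gives the survey area stated above.
area_km2 = pi * 96 ** 2
area_mi2 = pi * 60 ** 2
print(round(area_km2))  # 28953, i.e. "over 28,500 square kilometers"
print(round(area_mi2))  # 11310, i.e. "about 11,000 square miles"
```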
In 2006, Constant Hawk became the first Wide Area Motion Imagery platform to be deployed as part of the Army’s Quick Reaction Capability to help combat enemy ambushes and improvised explosive devices in Iraq. In 2009, BAE Systems would add night vision capabilities and increase the sensor density to 96 megapixels. In 2013, full color imagery processing capability would be added.
The system was so successful that the Marine Corps would adopt elements of the program to create its own system called Angel Fire and a derivative system called Kestrel.
ARGUS-IS As Constant Hawk was seeing its first deployment, several other similar systems were being developed that targeted more niche ISR roles; however, one system in particular would create a new class of aerial surveillance, previously thought to be impossible. Called the ARGUS-IS, this DARPA project, contracted to BAE Systems, aimed to image an area at such high detail and frame rate that it could collect "pattern-of-life" data that specifically tracks individuals within the sensor field. The system generates almost 21 TB of color imagery every second. Because ARGUS-IS is specifically designed for tracking, a processing system derived from the Constant Hawk project, called Persistics, was developed.
Because this tracking can even be done backwards in time, the system now becomes a powerful tool for forensic investigators and intelligence analysis of patterned human behavior.
The Most Complex System In Modern Cars (New Mind, 2023-02-11)
An airbag, in its most elemental form, is an automotive safety restraint system designed to inflate a cushioning bag extremely quickly during a collision, then rapidly deflate it in a controlled manner. Airbags are considered a passive restraint system because, unlike seatbelts, they require no interaction by the occupant for their operation.
SYSTEM DESIGN An airbag system is fundamentally composed of one or more inflation mechanisms located primarily within the steering wheel for the driver and the upper dashboard for the front passenger. These inflation mechanisms are controlled by a centralized system that continuously monitors for impact events using as few as one to dozens of sensors, depending on the system’s sophistication. Once this system detects an impact, one or several inflation mechanisms are pyrotechnically triggered by an electrical signal, causing a gas generating propellant to be ignited, rapidly inflating a bag that is folded within each inflation mechanism. While simple in concept, the difference between an airbag’s deployment protecting an occupant and causing traumatic or even deadly injuries comes down to the precise millisecond timing of its operation.
ANATOMY OF A COLLISION The airbag has an incredibly narrow window in which to act, within the first ⅓ of the entire collision duration, because it must deploy before the occupants contact any portion of the vehicle interior as it crushes, and before the limits of the seat belt’s stretch are reached. The airbag’s inflation must also be timed so that it is fully inflated before the occupant engages with it, to minimize trauma from the inflation process itself.
COMPRESSED AIR The earliest patented systems were based on a store of compressed air that would inflate the airbag using mechanical trigger valves. By the 1960s, practical airbag systems for vehicles were being explored by the major manufacturers, and from this decade of research it was determined that compressed air systems were far too slow-reacting to be effective. These flaws made the mechanical compressed air airbag system completely unsuitable for commercial adoption.
A BREAKTHROUGH Allen K. Breed made a breakthrough that finally made airbags commercially viable, with the development of the ball-in-tube electromechanical crash detection sensor. When a collision occurs, the ball is separated from the magnet, moving forward to electrical contacts and closing the trigger circuit. Breed also pioneered the use of a gas generator as a method for rapidly inflating an airbag. He devised an inflation mechanism that used just 30-150 grams of the solid fuel sodium azide as a gas generating agent. The sodium azide would exothermically and rapidly decompose to sodium and nitrogen, fully inflating the airbag with the resulting gas within just 60-80 milliseconds.
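The gas yield follows from the decomposition reaction 2NaN₃ → 2Na + 3N₂ and the ideal gas law. The sketch below estimates the nitrogen volume from a 70 g charge, an illustrative mid-range value within the 30-150 g figure above:

```python
# Nitrogen yield of a sodium azide charge via 2 NaN3 -> 2 Na + 3 N2,
# estimated with the ideal gas law at roughly room conditions.
MOLAR_MASS_NAN3 = 65.01   # g/mol
R_GAS = 8.314             # J/(mol*K)

def nitrogen_volume_liters(grams_nan3, temp_k=300.0, pressure_pa=101_325):
    mol_nan3 = grams_nan3 / MOLAR_MASS_NAN3
    mol_n2 = mol_nan3 * 3 / 2                       # 3 mol N2 per 2 mol NaN3
    m3 = mol_n2 * R_GAS * temp_k / pressure_pa      # ideal gas law, V = nRT/p
    return m3 * 1000                                # cubic meters -> liters

print(round(nitrogen_volume_liters(70), 1))  # ≈ 39.8 L at room temperature
```

The hot combustion gases occupy considerably more volume than this room-temperature figure, which is part of how a modest solid charge fills a full-size bag in tens of milliseconds.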
AIRBAG HISTORY Any car sold in the United States must now be certified to meet the Federal Motor Vehicle Safety Standards or FMVSS, a comprehensive set of regulations on vehicle design, construction, and performance. The NHTSA began to prepare for a second wave of mandates during the 1970s, specifically targeting a push for new safety technologies, with the airbag being a prime technology for regulatory compliance. The first mass-produced vehicles to have an airbag system were introduced in a government-purchased fleet in 1973. Called the Air Cushion Restraint System or ACRS, General Motors employed impact sensors mounted in the vehicle's front bumper to deploy the airbags embedded in the steering wheel for the driver, and in the dashboard for the passenger.
By 1984, the NHTSA would reach a compromise with the industry, agreeing to the introduction of a passive restraint system mandate for all new vehicles produced in the US, beginning on April 1, 1989. Manufacturers had two options: either an automatic seat belt system or the airbag.
The 1980s saw the industry's view of the airbag shift from a primary safety system to one designated as a supplemental restraint system or SRS, or under the less common designation of supplemental inflatable restraints or SIR.
THE NEXT WAVE OF AIRBAG TECHNOLOGY This proliferation led to the development of a new generation of airbag systems during the 1990s that overcame the flaws of earlier systems through the use of recent breakthroughs in the semiconductor industry.
ALGORITHMIC CRASH DETECTION The electronic control unit that formed the backbone of airbag systems, called the airbag control unit or ACU, would now become an embedded computer, relying on a fusion of MEMS sensor data and other vehicle inputs to employ algorithms that could manage a larger spectrum of collision types and inflation response profiles.
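As a rough illustration of the kind of logic an ACU might run (a deliberately simplified sketch, not any manufacturer's actual algorithm, and with made-up threshold values), a delta-v discrimination loop integrates deceleration samples and triggers when the accumulated velocity change crosses a threshold:

```python
# Illustrative sketch only: a simple delta-v crash-discrimination loop
# over sampled accelerometer data. Thresholds are invented for the example.
def should_deploy(accel_samples_g, dt=0.001, delta_v_threshold=8.0):
    """Trigger when accumulated velocity change exceeds a threshold.

    accel_samples_g: deceleration samples in g (positive = slowing down)
    dt: sample interval in seconds
    delta_v_threshold: velocity change in m/s treated as a deployable crash
    """
    G = 9.81
    delta_v = 0.0
    for a in accel_samples_g:
        delta_v += a * G * dt  # integrate acceleration into velocity change
        if delta_v >= delta_v_threshold:
            return True
    return False

# A hard 40 g pulse lasting 30 ms accumulates ~11.8 m/s of delta-v: deploy.
crash_pulse = [40.0] * 30
# Normal ~1 g braking over the same window accumulates ~0.3 m/s: no deploy.
braking = [1.0] * 30
```

Real ACUs fuse several sensors and run far more elaborate discrimination profiles, but the integrate-and-threshold core shown here is the basic idea.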
A modern head gasket is an intricate hybrid mechanical seal engineered to fill the space between a reciprocating engine’s head and block.
SEALING ENGINE LUBRICANT A head gasket must seal the passages that carry engine oil between the block and the head. Engine oil can vary dramatically in viscosity and temperature, ranging from the extreme lows of frigid ambient temperature to as high as 135°C or 275°F.
SEALING ENGINE COOLANT Similar to engine oil, on most water-cooled engines a head gasket must also seal the passages that carry engine coolant between the head and the block. Compared to engine oil, engine coolant has a relatively consistent viscosity and a lower maximum temperature of around 120°C or about 250°F, with normal operation seldom reaching above 104°C or 220°F. Much like with engine oil, the materials that seal engine coolant must cope not only with thermal cycling and movement but also with the coolant's corrosive properties.
SEALING COMBUSTION GASES Sealing combustion gases is, by far, the most brutal and critical requirement of a head gasket. A head gasket forms part of the combustion chamber, and if this seal is compromised, the affected cylinder will lose the ability to produce a normal combustion sequence. Depending on the nature of the failure, the cylinder may also consume or cross-contaminate other engine fluids.
STABILITY A head gasket must be deformable enough to maintain a seal between the imperfections of the head and block surfaces. In addition to these forces, head gaskets have to function under the dynamic and extreme mechanical stresses of combustion pressure. The head bolts that fasten the head to the block are also typically not symmetrically spaced, creating an unevenly distributed clamping force across the gasket, with each of these bolts exerting a force of up to 4,500 kgf or about 10,000 lbf. Beyond these expectations, they must also be durable and capable of lasting across a significant portion of the engine's life with little to no maintenance.
FIRST HEAD GASKETS With the introduction of the internal combustion engine in the 1860s, almost every type of elastic material ever used within steam engines was tried as a means of sealing combustion. As the internal combustion engine transitioned from its experimental early days to a mass-produced power plant, copper would become a popular material for these early head gaskets. The relative motion of the head and block would create inconsistencies in the clamping force along the gasket's surface. This was such a problem that in the early days of motorsport, head-gasket failure was the most common reason for race cars to not finish a race.
NEW GASKET TECHNOLOGY As the automotive industry began to flourish in the 1920s and 30s, less costly, mass-production-friendly head-gasket designs were explored. One durable yet relatively inexpensive option was the steel shim head gasket. Embossments are stamped, raised regions on critical sealing areas of a gasket that create a smaller contact point.
COMPOSITE GASKET The beater-add process offered a new, lower-cost gasket material option that would lead manufacturers to introduce the composite head gasket in the late 1940s. Metal beads, called fire rings, are created within the gasket's metal structure to seal the combustion chamber and protect the elastomer material from overheating. The non-metallic surface of the gasket is then impregnated with a silicone-based agent to seal any pores and prevent the gasket from swelling when it comes in contact with liquids. Some designs may even incorporate seal elements made from a high-temperature, chemical-resistant, fluorocarbon-based elastomer material called Viton.
MLS HEAD GASKET In 1970, Japanese gasket maker Ishikawa was issued the first patent for a revolutionary new type of head gasket, called the multi-layer steel or MLS head gasket. They effectively combine all of the benefits of previous gasket technologies into an extremely durable and adaptable component. The outer surfaces of the gasket are typically coated with a thin, fluorocarbon-based Viton layer in targeted areas to aid in surface sealing.
OTHER GASKET TECHNOLOGIES The elastomeric head gasket is an example of a cost-reduction-focused design. These gaskets use a single steel shim with a beaded coating of an elastomeric material such as silicone or Viton for fluid sealing. On the opposite end of the performance spectrum are modern solid-copper head gaskets. Grooves machined around each cylinder bore carry a stainless steel O-ring that, combined with the solid-copper gasket, is capable of sealing some of the highest combustion pressures found within reciprocating engines.
-- SUPPORT NEW MIND ON PATREON https://www.patreon.com/newmind

How The Most Hated Auto Part Changed The World
New Mind | 2022-12-21 | ▶ Visit https://brilliant.org/NewMind to get started learning STEM for free, and the first 200 people will get 20% off their annual premium subscription
According to the Environmental Protection Agency’s estimates, in 2019, 7% of the light-duty vehicles in the United States did not comply with their mandated vehicle emission regulations. Even more astonishing, is the fact that one specific component on these vehicles accounts for about 68% of these compliance failures.
HISTORY OF SMOG
Though the catalytic converter has become the automobile industry's primary mechanism for controlling exhaust emissions in internal combustion engines, its origin is a byproduct of industrialization as a whole. Around the turn of the 20th century, the smog created in urban areas by factory smokestacks triggered the first concerns for air quality. As the automobile and the internal combustion engine became more abundant, their impact on air quality grew more worrisome. During the 1940s, the growing problem of urban smog in the United States, specifically in the Los Angeles area, prompted the French mechanical engineer Eugene Houdry to take an interest in the problem. Houdry was an expert in catalytic oil refining and had developed techniques for catalytically refining heavy liquid tars into aviation gasoline.
WHAT IS SMOG
The exhaust of all internal combustion engines used on vehicles is composed primarily of three constituent gases: nitrogen, carbon dioxide, and water vapor. In lean operating modes of gasoline engines and in diesel engines, oxygen is also present. Diesel engines by design generally operate with excess air, which always results in exhausted oxygen, especially at low engine loads. The nitrogen and oxygen are primarily pass-throughs of atmospheric gases, while the carbon dioxide and water vapor are the direct products of the combustion process. Depending on the engine type and configuration, these harmless gases form 98-99% of an engine's exhaust. However, the remaining 1-2% of combustion products comprise thousands of compounds, all of which, to some degree, create air pollution.
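The pass-through and product split described here follows from the stoichiometry of ideal combustion; for gasoline, using octane as a stand-in fuel:

```latex
2\,\mathrm{C_8H_{18}} + 25\,\mathrm{O_2} \;\longrightarrow\; 16\,\mathrm{CO_2} + 18\,\mathrm{H_2O}
```

with atmospheric nitrogen carried through largely unreacted. Real combustion deviates from this ideal, producing the trace pollutants discussed next.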
The primary components of these pollutants, carbon monoxide and nitrogen oxides, are formed within the highly reactive, high-temperature flame zone of the combustion cycle, while unburned and partially oxidized hydrocarbons tend to form near the cylinder walls, where the combustion flame is quenched. Particulate matter, especially in diesel engines, is also produced in the form of soot. In addition to this, engine exhaust also contains partially burned lubricating oil, and ash from metallic additives in the lubricating oil and wear metals.
WHY CATALYTIC CONVERTERS
In 1970, the United States passed the Clean Air Act, which required all vehicles to cut their emissions by 75% in only five years and mandated the removal of the antiknock agent tetraethyl lead from most types of gasoline.
THE FIRST CONVERTER
Modern automotive catalytic converters are composed of a steel housing containing a catalyst support called a substrate, placed inline with an engine's exhaust stream. Because the catalyst requires a temperature of over 450°C to function, converters are generally placed as close to the engine as possible to promote rapid warm-up and heat retention.
On early catalytic converters, the catalyst media was made of pellets, placed in a packed bed. These early designs were restrictive, sounded terrible, and wore out easily. During the 1980s, this design was superseded by a cubic ceramic-based honeycomb monolithic substrate, coated in a catalyst. These new cores offered better flow and because of their much larger surface area, exposed more catalyst material to the exhaust stream. The ceramic substrate used is primarily made of a synthetic mineral known as cordierite.
TYPES OF CATS
The first generation of automotive catalytic converters worked only by oxidation. These were known as two-way converters as they could only perform two simultaneous reactions - the oxidation of carbon monoxide to carbon dioxide and the oxidation of hydrocarbons to carbon dioxide and water.
By 1981, "three-way" catalytic converters had superseded their two-way predecessor. Three-way catalytic converters induce chemical reactions that reduce nitrogen oxide to harmless nitrogen. This reaction can occur with either carbon monoxide, hydrogen, or hydrocarbons within the exhaust gas.
While three-way catalytic converters are more efficient at removing pollutants, their effectiveness is highly sensitive to the air-fuel mixture ratio. For gasoline combustion, this ratio is between 14.6 and 14.8 parts air to one part fuel. Furthermore, they need to oscillate between lean and rich mixtures within this band in order to keep both reduction and oxidation reactions running. Because of this requirement, computer-controlled closed-loop electronic fuel injection is required for their effective use.
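A minimal sketch of that closed-loop oscillation (with an assumed trim step size and a generic narrowband-sensor model, not any real ECU's logic): the controller only knows rich or lean, so it continually nudges the fuel trim back and forth across stoichiometry, producing the lean/rich dither the catalyst needs.

```python
# Simplified closed-loop fueling sketch. A narrowband O2 sensor effectively
# reports only rich/lean, so the controller dithers across stoichiometry.
STOICH_AFR = 14.7  # stoichiometric air-fuel ratio for gasoline
STEP = 0.002       # fuel-trim step per control cycle (assumed value)

def update_fuel_trim(trim, measured_afr):
    """Lean mixture (AFR above stoich) -> add fuel; rich -> remove fuel."""
    if measured_afr > STOICH_AFR:
        return trim + STEP  # lean: increase fuel delivery
    return trim - STEP      # rich: decrease fuel delivery
```

Each cycle overshoots slightly, so the mixture crosses stoichiometry again and again, keeping both the oxidation and reduction reactions supplied.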
The Incredible Technology Behind Sandpaper
New Mind | 2022-12-03
Sandpaper belongs to a class of abrasive products known as coated abrasives. These products are composed of an abrasive element bonded to a backing material such as paper, fabric, rubber, metal or resin, and they generally possess some degree of flexibility. King Solomon is said to have used a mysterious worm or an abrasive substance called the Shamir that had the power to cut through or disintegrate stone, iron and diamond. In the 13th century, Chinese craftsmen were known to bond sand, crushed shells and sharp seeds onto parchment with natural gum. Other notable natural substances that have been used as abrasive tools include shark skin, coelacanth scales, and the boiled and dried rough horsetail plant.
INDUSTRIAL ERA After mastering the process, Oakey would go on to found John Oakey & Sons Limited in 1833 with the goal of mechanizing production, and within a decade Oakey had not only developed new adhesive and manufacturing techniques that enabled the mass production of sandpaper but also created the first glass-based coated abrasives. These products used small grains of ground-up glass or garnet, called frit, that are far more durable than sand and retain a sharp-edged structure as they wear down, producing a longer-lasting abrasive cutting action. An initial attempt at producing their own grinding wheels was met with little success, so the company, now branded as 3M, soon transitioned into the coated abrasives industry. 3M's initial venture into the market using natural abrasives was still plagued with quality issues, and its reputation began to suffer.
Three-M-ite was a cloth-backed coated abrasive that relied on a new class of synthetic abrasives. These abrasives were a direct result of the advent of electric furnace technology, which allowed a combination of base materials to be fused by heating them to temperatures above 2000°C or 3600°F, forming new crystal structures with favorable abrasive properties.
NEW TYPES OF SANDPAPER In 1921, the company introduced the world’s first water-resistant coated abrasive called Wetordry. When bonded to a waterproof paper backing and used with water, silicon carbide sandpaper dramatically enhanced many of the key properties that define the effectiveness of a coated abrasive.
HOW SANDPAPER WORKS The effectiveness of this action is highly dependent on the shape of the abrasive grain, with sharper edges producing more localized pressure at the interface points of both materials. The durability of a sandpaper is primarily determined by the relative hardness between the abrasive and the work material, the adhesion properties and size of the abrasive grain or grit size, and its ability to resist loading, where ejected material is trapped between the grains.
A NEW AGE OF SYNTHETIC ABRASIVES Alumina-Zirconia is an incredibly tough and hard abrasive that offers nearly twice the performance of aluminum oxide in both efficiency and durability. It was also relatively easy to mass manufacture and quickly became a popular choice for metal working abrasive products.
SOL-GEL CERAMICS In the early 1980s, a revolutionary process that would dramatically improve abrasive performance would be introduced by 3M with the industry's first steps into nanotechnology. This new class of ceramic nanoparticle abrasives is produced using a method called the sol-gel process. This new abrasive became the foundation of their new Cubitron product line, and it would soon gain wide acceptance in the metalworking industry both in coated product form and as bonded grinding tooling.
MICROREPLICATION In both synthetic and natural grain abrasives, the inconsistent particle shape of crushed grain creates inconsistent grinding and plowing action on the workpiece. These first trials in shape manipulation produced a coarsely shaped repeating pyramid mineral that was initially introduced in 1992 as a low-grit, aluminum-oxide-based metalworking product called 3M Trizact. By the turn of the century, 3M would introduce a new class of product lines based on precision-shaped grain or PSG technology.
In this process, a casting film is used to roll a microstructure onto a wet, uncured abrasive gel coating. As this occurs, a combination of UV light and heat is applied under the roller's pressure, curing the abrasive in its designed structure. Microreplication would first be used to further refine the Trizact product line. Cubitron II utilized a unique standing ceramic aluminum oxide triangle microstructure that not only had an extremely sharp tip that would cut through the work material instead of plowing through it, but, by design, would fracture to produce a new sharp edge as it wore, effectively becoming a self-sharpening grain.
The Story Of Fuel Injection
New Mind | 2022-11-23 | This is the story of how fuel injection transformed from its simple beginnings as a mechanism to burn fuel oil to the complex computer-driven integrated fuel management systems found on today's vehicles.

The Truth About Self Driving Cars
New Mind | 2022-11-02
Almost a decade ago, a sizable list of tech companies, collectively wielding over $100 billion in investment, asserted that within five years the once-unimaginable dream of fully self-driving cars would become a normal part of everyday life. These promises, of course, have not come to fruition. Despite this abundance of funding, research and development, expectations are beginning to shift as the dream of fully autonomous cars is proving to be far more complex and difficult to realize than automakers had anticipated.
THE LAYERS OF SELF DRIVING Much like how humans drive a vehicle, autonomous vehicles operate using a layered approach to information processing. The first layer uses a combination of multiple satellite based systems, vehicle speed sensors, inertial navigation sensors and even terrestrial signals such as cellular triangulation and Differential GPS, summing the movement vector of the vehicle as it traverses from its start waypoint to its destination. The next layer is characterized by the process of detecting and mapping the environment around the vehicle both for the purposes of traversing a navigation path and obstacle avoidance. At present, the primary mechanisms of environment perception are laser navigation, radar navigation and visual navigation.
LIDAR In laser navigation, a LIDAR system launches a continuous laser beam or pulse at the target, and the reflected signal is received back at the sensor. By measuring the reflection time, signal strength and frequency shift of the reflected signal, spatial point cloud data of the target is generated. Early computer-based experiments with autonomous vehicles have relied on LIDAR technology since the 1980s, and even today it is used as the primary sensor for many experimental vehicles. These systems can be categorized as single-line, multi-line or omnidirectional.
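The reflection-time measurement reduces to simple time-of-flight arithmetic; a minimal sketch:

```python
# Time-of-flight ranging: distance from the round-trip reflection time
# of a laser pulse.
C = 299_792_458.0  # speed of light, m/s

def lidar_range_m(round_trip_seconds):
    # The pulse travels to the target and back, so halve the path length.
    return C * round_trip_seconds / 2.0

# A reflection arriving about 667 nanoseconds after launch places the
# target at roughly 100 m.
range_100m = lidar_range_m(667e-9)
```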
RADAR The long-range radars used by autonomous vehicles tend to be millimeter wave systems that can provide centimeter accuracy in position and movement determination. These systems, known as Frequency modulated continuous wave RADAR or FMCW, continuously radiate a modulated wave and use changes in phase or frequency of the reflected signal to determine distance.
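For a linear chirp, the range relationship commonly used in FMCW analysis can be written as:

```latex
R = \frac{c \, f_b \, T_c}{2B}
```

where $f_b$ is the beat frequency between the transmitted and received signals, $T_c$ is the chirp duration, $B$ is the swept bandwidth, and $c$ is the speed of light; a wider bandwidth $B$ yields finer range resolution.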
VISUAL PERCEPTION Visual perception systems attempt to mimic how humans drive by identifying objects, predicting motion, and determining their effect on the immediate path a vehicle must take. Many within the industry, including the visual-only movement leader Tesla, believe that a camera centric approach, when combined with enough data and computing power, can push artificial intelligence systems to do things that were previously thought to be impossible.
AI At the heart of the most successful visual perception systems is the convolutional neural network or CNN. Their ability to classify objects and patterns within the environment make them an incredibly powerful tool. As this system is exposed to real world driving imagery, either through collected footage or from test vehicles, more data is collected and the cycle of human labeling of the new data and training the CNN is repeated. This allows them to both gauge distance and infer the motion of objects as well as the expected path of other vehicles based on the driving environment.
At the current state of technology, the fatal flaw of autonomous vehicle advancement has been the pipeline by which these systems are trained. A typical autonomous vehicle has multiple cameras, each capturing tens of images per second. The sheer scale of this data, which requires human labeling and the appropriate retraining, becomes a pinch point in the overall training process.
DANGERS Even within the realm of human monitored driver assistance, in 2022 over 400 crashes in the previous 11 months involving automated technology have been reported to the National Highway Traffic Safety Administration. Several noteworthy fatalities have even occurred with detection and decision making systems being identified as a contributing factor.
COUNTERPOINT While the argument could be made that human error statistically causes far more accidents than autonomous vehicles, including the majority of driver-assisted accidents, when autonomous systems do fail, they tend to do so in situations that would otherwise be manageable by a human driver. Despite autonomous vehicles having the ability to react and make decisions faster than a human, the environmental perception these decisions are based on remains so far from the capabilities of the average human that the majority of the public still does not trust them.
The Tunguska event is believed to have been caused by the air burst of an asteroid or comet about 50-60 meters or 160-200 ft in size, at an altitude of 5-10 kilometers or about 3-6 miles. It is estimated that the asteroid had a kinetic energy of around 15 megatons, the equivalent of an explosion of 1,000 Hiroshima-type atomic bombs. It has even been estimated that the explosion caused a deceleration of the Earth's rotation relative to the rest of the Solar System by 4 microseconds. The 2013 Chelyabinsk explosion, caused by a near-Earth asteroid about 20 meters or 66 ft in size, was estimated to have released the energy equivalent of around 500 kilotons of TNT.
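The Chelyabinsk figure can be sanity-checked with a back-of-the-envelope kinetic energy calculation; the density and entry speed below are assumptions, not values from the text:

```python
# Order-of-magnitude check of the ~500 kt estimate: kinetic energy of a
# ~20 m stony asteroid at a typical entry speed. Density and speed are
# assumed round numbers.
import math

DIAMETER_M = 20.0   # from the text
DENSITY = 3300.0    # kg/m^3, assumed stony composition
SPEED = 19_000.0    # m/s, approximate entry speed (assumption)
KT_TNT = 4.184e12   # joules per kiloton of TNT

radius = DIAMETER_M / 2
mass = DENSITY * (4 / 3) * math.pi * radius**3  # ~1.4e7 kg
energy_kt = 0.5 * mass * SPEED**2 / KT_TNT
# energy_kt lands in the same ballpark as the ~500 kt figure above
```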
The Chelyabinsk event is a reminder of the destructive power of even small asteroids, and highlights both the frequency of these events and the importance of identifying and tracking these potential threats.
PHO PHOs are defined as near-Earth objects, such as asteroids or comets, that have an orbit which approaches the Earth at a distance of 0.05 astronomical units or 19.5 lunar distances, or less. 85% of these asteroids are known as Apollo asteroids, as they hold an orbit that keeps them within the inner solar system.
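The 0.05 AU to 19.5 lunar distance equivalence is easy to verify directly:

```python
# Checking the PHO threshold conversion: 0.05 AU expressed in mean
# Earth-Moon distances.
AU_KM = 149_597_870.7          # kilometers per astronomical unit
LUNAR_DISTANCE_KM = 384_400.0  # mean Earth-Moon distance, km

threshold_km = 0.05 * AU_KM            # about 7.48 million km
threshold_ld = threshold_km / LUNAR_DISTANCE_KM
# threshold_ld comes out to about 19.5, matching the figure in the text
```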
DETECTING POTENTIAL THREAT OBJECTS Discovery surveys scan the sky slowly, on the order of once a month, but produce deeper, more highly resolved data. Warning surveys, in contrast, utilize smaller telescopes to rapidly scan the sky for smaller asteroids that are within several million kilometers of Earth. These dedicated survey installations first started to appear around the late 1990s and were initially clustered together in a relatively small part of the Northern Hemisphere. Initiated in 2015, one robotic astronomical survey and early warning system, located in the Hawaiian islands, is optimized for detecting smaller near-Earth objects a few weeks to days before they impact Earth.
Further NASA funding brought the system to the Southern Hemisphere, with two additional telescopes becoming operational in South Africa in early 2022. At present, several other Southern Hemisphere surveys are also under construction. In addition to ground-based surveys, the Wide-field Infrared Survey Explorer or WISE infrared telescope, in Earth orbit, was tasked with a 4-month extension mission called NEOWISE to search for near-Earth objects using its remaining capabilities. While this initial extension occurred in 2010, NASA reactivated the mission in 2013 for a new three-year mission to search for asteroids that could collide with Earth, and by July 2021, NASA would reactivate NEOWISE once again, with another PHO detection mission extending until June of 2023.
Currently, a replacement space-based infrared telescope survey system called the NEO Surveyor is under development with an expected deployment in 2026.
DART MISSION DART was launched on November 24, 2021 on a dedicated Falcon 9 mission. The mission payload, along with Falcon 9's second stage, was placed directly on an Earth escape trajectory and into heliocentric orbit when the second stage reignited for a second escape burn. Despite DART carrying enough xenon fuel for its ion thruster, Falcon 9 did almost all of the work, leaving the spacecraft to perform only a few trajectory-correction burns with simple chemical thrusters for most of the journey. On 27 July 2022, the DRACO camera detected the Didymos system from approximately 32 million km or 20 million mi away, and DART began to refine its trajectory.
These captured images were transmitted in real time to Earth using the RLSA communication system. A few minutes before impact, DART performed its final trajectory corrections. The impact ultimately changed the overall orbit of the asteroid system. An asteroid on a hypothetical collision course with Earth would only require a path shift of 6,500 km to avoid the Earth, a tiny amount relative to the tens of millions of kilometers it would travel orbiting the sun.
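To get a feel for why such a small shift is achievable, a crude first-order estimate (deliberately ignoring the orbital mechanics, which generally amplify the effect over time) simply multiplies a small velocity change by the available lead time:

```python
# First-order drift estimate: displacement accumulated by a small
# velocity change applied years before a predicted impact. Real
# along-track drift grows faster than this linear approximation.
SECONDS_PER_YEAR = 3.156e7

def displacement_km(delta_v_mm_s, years_of_lead_time):
    # mm/s -> km/s, then multiply by elapsed seconds
    return delta_v_mm_s * 1e-6 * years_of_lead_time * SECONDS_PER_YEAR

# Even a 10 mm/s nudge, given 20 years, drifts the asteroid by thousands
# of kilometers, on the order of the 6,500 km shift mentioned above.
drift = displacement_km(10, 20)
```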
LICACUBE Built to carry out observational analysis of the Didymos asteroid binary system after DART's impact, it was the first deep space mission to be developed and autonomously managed by an Italian team.
HERA - FOLLOW UP In October 2024, the ESA will launch the Hera mission with its primary objective being the validation of the kinetic impact method to deviate a near-Earth asteroid in a colliding trajectory with Earth. Hera will fully characterize the composition and physical properties of the binary asteroid system including the sub-surface and internal structures. Hera is expected to arrive at the Didymos system in 2026.
On September 23, 2019, a new world record for 0-400-0 km/h was set at Råda airfield in Sweden by the Koenigsegg Regera. During this attempt, the Regera averaged around 1.1 MW of dissipation during the braking phase; the system dissipated enough energy to power the average American home for just under 2 hours. On almost every powered wheeled vehicle, the brake system produces more deceleration force than the drivetrain's acceleration force.
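Those energy figures can be loosely reproduced from first principles; the curb weight and household draw below are assumptions, so the result is only a ballpark consistency check:

```python
# Back-of-the-envelope version of the claim above: kinetic energy at
# 400 km/h for a Regera-sized car, expressed as hours of average US
# household consumption. Both constants below are assumed values.
CURB_MASS_KG = 1590.0  # assumed approximate Regera curb weight
V_TOP = 400 / 3.6      # 400 km/h in m/s
AVG_HOME_W = 1200.0    # assumed ~1.2 kW average US household draw

energy_j = 0.5 * CURB_MASS_KG * V_TOP**2   # ~10 MJ of kinetic energy
hours = energy_j / (AVG_HOME_W * 3600)
# hours comes out on the order of two, consistent with the claim
```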
ORIGINS The first wheeled vehicle brake systems consisted simply of a block of wood and a lever mechanism. To stop a vehicle, the lever was pulled, forcing the block of wood to grind against the steel rim of the wheel. Wooden brakes were commonly used on horse-drawn carriages and would even be used on early steam-powered cars that were effectively steam powered carriages.
DRUM BRAKES The first brake system specifically designed for cars with pneumatic tires would be developed from an idea first devised by Gottlieb Daimler. Daimler’s system worked by wrapping a cable around a drum coupled to a car’s wheel. As the cable was tightened, the wheel would be slowed down by friction. While it was far more responsive than a wooden block, the exposed friction material of the external design made it less effective when exposed to the elements.
This idea evolved into the drum brake with a fixed plate and two friction shoes. These early systems used a mechanical cam that, when rotated, would apply a force through the web to the lining table and its friction material. On drum brakes, the shoe located towards the front of the vehicle is known as the primary shoe while the rearward one is designated the secondary shoe.
MASTER CYLINDER At the drum brake, a hydraulic cylinder containing two pistons replaces the cam mechanism, applying a force outward on the brake shoes as pressure builds within the system. In hydraulic brake systems, a combination of rigid hydraulic lines made from either steel or a nickel-copper alloy and flexible reinforced rubber hoses is used to transfer fluid pressure between the master cylinder and the brake cylinders. Hydraulics also increased safety through redundancy, by allowing the brake system to be split into two independent circuits using tandem master cylinders. Four-wheel hydraulic brakes would first appear on a production car with the 1921 Duesenberg Model A, though Rickenbacker would be the first manufacturer to offer them on vehicles that were mid-priced and more mass-appealing, in 1922. Shortly thereafter, other manufacturers would adopt hydraulic brakes, and they quickly became the industry standard.
VACUUM BOOSTER Many of these ideas involved using compressors to pressurize either air or hydraulic fluid in order to reduce the force needed by an operator to actuate a vehicle's brakes. First introduced by the Pierce-Arrow Motor Car Company in 1928, this system, originally designed for aviation, uses the vacuum generated by an engine's air aspiration to build a vacuum within a device known as a brake vacuum servo. By the 1930s, vacuum-assisted drum brakes began to grow in popularity.
DISC BRAKES The next leap in braking technology got its start in England in the late 1890s with the development of a disc-type braking system by the Lanchester Motor Company. This system used a cable operated clamping device called a caliper that would grab a thin copper disc that was coupled to the wheel, in order to slow its rotation. By 1955, Citroën would introduce the Citroen DS, the first true mass-production car to field disc brakes. For the vast majority of modern disc-brakes systems, the disc or rotor is made from gray cast iron.
ABS These systems attempt to modulate brake pressure to find the optimal amount of braking force the tires can dynamically handle, just as they begin to slip. In most situations, maximum braking force occurs when there is around 10-20% slippage between the braked tire's rotational speed and its contact surface. By the early 1950s, the first widely used anti-skid braking system, called Maxaret, would be introduced by Dunlop.
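The slip window an ABS controller targets can be expressed directly; a minimal sketch:

```python
# Slip ratio as used in ABS control: the relative difference between
# vehicle speed and the braked wheel's effective surface speed.
def slip_ratio(vehicle_speed, wheel_speed):
    """0.0 = free rolling, 1.0 = fully locked wheel."""
    return (vehicle_speed - wheel_speed) / vehicle_speed

def in_peak_braking_band(vehicle_speed, wheel_speed):
    # Maximum braking force typically occurs around 10-20% slip.
    return 0.10 <= slip_ratio(vehicle_speed, wheel_speed) <= 0.20

# At 30 m/s, a wheel surface speed of 25.5 m/s is 15% slip: inside the
# band. A locked wheel (0 m/s) is 100% slip: far past the peak.
```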
It would take the integration of electronics into braking to make the concept viable for cars. As the wheel begins to accelerate out of a skid, the controller rapidly reapplies hydraulic pressure to the wheel until it detects deceleration once again.
COMPOSITES Around the early 2000’s a derivative material known as carbon fiber-reinforced silicon carbide would start appearing in high end sports cars. Called carbon-ceramic brakes, they carry over most of the properties of carbon-carbon brakes while being both more dense and durable and they possess the key property of being effective even at the lower temperature of road car use.
INTRO In modern digital computers, these instructions resolve down to the manipulation of information represented by distinct binary states. These bits may be abstractly represented by various physical phenomena, such as by mechanical, optical, magnetic, or electric methods and the process by which this binary information is manipulated is also similarly versatile, with semiconductors being the most prolific medium for these machines. Fundamentally, a binary computer moves individual bits of data through a handful of logic gate types.
LIMITATIONS OF ALGORITHMS In digital computing, binary information moves through a processing machine in discrete steps of time. The number of steps an algorithm requires relative to the size of its input is known as the algorithm's complexity. An example of a constant time algorithm would be one that determines if a number is odd or even: it only needs to examine the number's last binary digit, taking the same number of steps regardless of input size. Other algorithms execute at a rate that is directly correlated to the size of the algorithm's input; these are known as linear time algorithms.
This characteristic becomes obvious within a basic addition algorithm. Because the number of steps, and inherently the execution time, is directly determined by the size of the number inputs, the algorithm scales linearly in time. Constant and linear time algorithms generally scale to practical execution times in common use cases; however, one category of algorithm in particular suffers from the characteristic of quickly becoming impractical as it grows. These are known as exponential time algorithms, and they pose a huge problem for traditional computers as their execution time can quickly grow to an impractical level as input size increases.
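The three complexity classes can be made concrete with toy implementations (illustrative only; real hardware does not add by counting):

```python
# Constant, linear, and exponential time, measured in elementary steps
# rather than seconds.

def is_odd(n):
    # Constant time: inspect only the lowest binary digit, regardless
    # of how large n is.
    return n & 1 == 1

def add_by_counting(a, b):
    # Linear time in b: the number of steps grows directly with the input.
    total = a
    for _ in range(b):
        total += 1
    return total

def subsets(items):
    # Exponential time: a list of n items has 2**n subsets, so merely
    # enumerating them doubles in cost with each added item.
    if not items:
        return [[]]
    rest = subsets(items[1:])
    return rest + [[items[0]] + s for s in rest]
```

Going from 3 items to 4 doubles the work of `subsets`, while `is_odd` is unaffected by input size; that divergence is exactly why exponential algorithms become impractical.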
QUBIT Much like how digital systems use bits to express their fundamental unit of information, quantum computers use an analog called a qubit. Quantum computing, by contrast, is probabilistic. It is the manipulation of these probabilities as they move between qubits that forms the basis of quantum computing. Qubits are physically represented by quantum phenomena.
HOW QUANTUM PROCESSING WORKS A qubit possesses an inherent phase component, and with this characteristic of a wave, a qubit’s phase can interfere either constructively or destructively to modify its probability magnitudes within an interaction.
BLOCH SPHERE A Bloch sphere visualizes a qubit's magnitude and phase using a vector within a sphere. In this representation, the two classical bit states are located at the top and bottom poles, where the probabilities become a certainty, while the remaining surface represents probabilistic quantum states, with the equator marking an equal superposition where either classical bit state is equally possible. When a measurement is made on a qubit, it decoheres to one of the definitive polar states based on its probability magnitude.
PAULI GATES Pauli gates rotate the vector that represents a qubit's probability magnitude and phase 180 degrees around the respective x, y, and z axes of its Bloch sphere. For the X and Y gates, this effectively inverts the probability magnitude of the qubit, while the Z gate only inverts its phase component.
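These rotations can be sketched with ordinary matrix arithmetic. The Pauli matrices below are their standard textbook forms; the variable names and the use of NumPy are purely illustrative:

```python
import numpy as np

# Computational basis states |0> and |1> as column vectors.
ket0 = np.array([1, 0], dtype=complex)
ket1 = np.array([0, 1], dtype=complex)

# The three Pauli gates as 2x2 unitary matrices.
X = np.array([[0, 1], [1, 0]], dtype=complex)     # 180° rotation about x: bit flip
Y = np.array([[0, -1j], [1j, 0]], dtype=complex)  # 180° rotation about y: bit and phase flip
Z = np.array([[1, 0], [0, -1]], dtype=complex)    # 180° rotation about z: phase flip only

# X inverts the probability magnitudes: |0> becomes |1>.
assert np.allclose(X @ ket0, ket1)

# Z leaves the magnitudes alone but negates the phase of |1>.
assert np.allclose(Z @ ket1, -ket1)
```

Because each gate is a 180-degree rotation, applying any Pauli gate twice returns the qubit to its original state.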
HADAMARD GATES Some quantum gates have no classical digital analogs. The Hadamard gate, or H gate, is one of the most important unary quantum gates, and it exhibits this quantum uniqueness. Take a qubit at state level 1, for example. The first H gate places it into an equal superposition, and a second H gate would normally reverse this, restoring the original state. If a measurement is made in between the two H gates, however, the collapsing of the first H gate's superposition destroys this information, making the second H gate's effect applicable only to the collapsed state of the measurement.
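A minimal numerical sketch of this behavior, using the standard Hadamard matrix (the variable names are ours):

```python
import numpy as np

# Standard Hadamard matrix and the |1> basis state.
H = np.array([[1, 1], [1, -1]], dtype=complex) / np.sqrt(2)
ket1 = np.array([0, 1], dtype=complex)

# One H gate puts |1> into an equal superposition: (|0> - |1>)/sqrt(2).
superposed = H @ ket1
probs = np.abs(superposed) ** 2
assert np.allclose(probs, [0.5, 0.5])   # either outcome equally likely

# Two H gates in sequence cancel out (H is its own inverse), restoring |1>.
assert np.allclose(H @ H @ ket1, ket1)

# A measurement between the two gates would collapse the superposition to a
# basis state; the second H applied to that collapsed state yields a 50/50
# outcome instead of deterministically restoring |1>.
```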
OTHER UNARY GATES In addition to the Pauli gates and the Hadamard Gate, two other fundamental gates known as the S gate and T gate are common to most quantum computing models.
CONTROL GATES Control gates trigger a correlated change to a target qubit when a state condition of the control qubit is met. A CNOT gate causes a state flip of the target qubit, much like a digital NOT gate, when the control qubit is at a state level of 1. When the control qubit is placed in a superposition by an H gate, the correlation created by entanglement through the CNOT gate also places the target qubit into a superposition.
When the control or target qubit state is collapsed by measurement, the other qubit's state is always guaranteed to be correlated by the CNOT operation. CNOT gates are used to create other composite control gates, such as the CCNOT or Toffoli gate, which requires two control qubits at a 1 state to invert the target qubit; the SWAP gate, which swaps two qubit states; and the CZ gate, which performs a phase flip. When combined with the fact that a qubit is continuous by nature and has infinite states, this quickly scales up to a magnitude of information processing that rapidly surpasses traditional computing.
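The H-then-CNOT sequence described above can be checked with a small two-qubit state-vector simulation. This is an illustrative sketch of the mathematics, not the circuit of any particular hardware:

```python
import numpy as np

# H applied to the control (high-order) qubit, identity on the target.
H2 = np.kron(np.array([[1, 1], [1, -1]]) / np.sqrt(2), np.eye(2))

# CNOT: flips the target qubit when the control qubit is 1.
CNOT = np.array([[1, 0, 0, 0],
                 [0, 1, 0, 0],
                 [0, 0, 0, 1],
                 [0, 0, 1, 0]], dtype=complex)

# Start both qubits at |00> (amplitude vector over |00>,|01>,|10>,|11>).
state = np.zeros(4, dtype=complex)
state[0] = 1

# H then CNOT produces the entangled Bell state (|00> + |11>)/sqrt(2).
bell = CNOT @ H2 @ state
probs = np.abs(bell) ** 2

# Only the correlated outcomes 00 and 11 are ever measured; the mixed
# outcomes 01 and 10 have zero probability.
assert np.allclose(probs, [0.5, 0, 0, 0.5])
```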
SUPPORT NEW MIND ON PATREON https://www.patreon.com/newmind
The Plot To Eliminate Cold War Scientists | New Mind | 2022-06-30
During the 1980s, amid the peak of the Cold War's technology race, a series of peculiar deaths among scientists working in Britain's defense industry began to baffle investigators. Most of the victims were research staff of Marconi Electronic Systems, with the majority being computer scientists working on defense projects associated with US Strategic Defense Initiative research and development. Furthermore, many of these deaths were under bizarre circumstances, with their underlying causes ruled as undeterminable.
While the Marconi deaths grabbed the headlines, they were accompanied by other suspicious deaths throughout the defense industry of Europe. In 1986, several West German scientists working on projects tied to the Strategic Defense Initiative were also found dead under mysterious circumstances, all of whom had been involved either directly or peripherally in the program and its related projects. The UK's Computer Weekly correspondent Tony Collins would file a series of noteworthy stories investigating the deaths.
In 1990, Collins would go on to publish his book, 'Open Verdict', chronicling the series of deaths and the suspicious anecdotal evidence that tied them together. However, despite the circumstantial evidence suggesting a clandestine plot to hinder the UK's defense industry, no firm conclusions as to its true nature were ever reached.
VARNISH At the dawn of the automotive industry, early motor-vehicles were painted in a manner similar to both wooden furniture and horse-drawn carriages of the time. A varnish-like product was brushed onto the vehicle’s surfaces and subsequently sanded and smoothed. After multiple layers of varnish were established, the vehicle was then polished. Varnishes are generally composed of a combination of a drying oil, a resin, and a solvent.
LACQUERS The first true automotive-specific coatings would emerge in the early 1920s as a result of an accidental discovery. Nitrocellulose, the first man-made plastic, had been created in 1862 by Alexander Parkes, and a liquid form of it became the basis for nitrocellulose lacquer, a product that would become a popular staple of the automotive finishing industry for decades to come.
Dupont chemist Edmund Flaherty would go on to refine the use of nitrocellulose dissolved in a solvent, creating a system that combined naphtha, xylene, toluene, acetone, various ketones, and plasticizing materials that enhanced durability and flexibility, to produce a fast-drying liquid that could be sprayed. Nitrocellulose lacquer has the advantage of being extremely fast drying, and it produces a tougher and more scratch-resistant finish.
ENAMELS By the 1930s, the development of alkyd enamel coatings would offer a significant enhancement over the properties of existing lacquers. Unlike lacquers, which harden purely by solvent evaporation, alkyd enamels cure through an oxidation reaction. This reaction occurs between the fatty acids of the oil portion of the resin and oxygen from the surrounding air, creating a durable film as the solvent evaporates.
ACRYLICS In the 1950s, a new acrylic binder technology would be introduced that would transform the automotive coatings industry. Acrylic paints are based on polyacrylate resins. These synthetic resins are produced by the polymerization of acrylic esters or acrylates, forming a durable plastic film. Like previous systems, the acrylates are dissolved within a hydrocarbon solvent and applied using spraying.
However, unlike alkyds, acrylate polymerization occurs without surrounding oxygen and, in most production acrylic systems, is initiated with a catalyst based on isocyanates or melamines. Polyacrylate resins do not easily absorb radiation in the solar spectrum, giving them excellent resistance to UV degradation when compared to previous resins.
UNDERCOATS Since the inception of their use, most of these undercoats or primers were composed of a combination of alkyd and oleaginous resins to produce an interface coating. Initially these coatings were applied to individual panels through dip coating, though this would eventually evolve into a combination of dipping and spraying entire body assemblies. Because undercoats directly interface with the vehicle's base metal, they serve as the primary form of corrosion protection.
However, the process by which they were applied resulted in inconsistent coverage throughout the vehicles. This was due to recesses and enclosed areas on the vehicle’s body. In the 1960s, Ford Motor Company would pioneer a dramatically different approach to vehicle priming through electrodeposition. The car body is coated on the production line by immersing the body in a tank containing the aqueous primer dispersion and subjecting it to a direct current charge.
EPA By the end of the 1970s, the EPA had sought to reduce photochemically reactive hydrocarbon solvent discharges from industrial finish operations by introducing emission requirements that restricted finishes to be sprayed at a minimum volume solids content of 60%.
CLEAR COAT This initiative led to a new approach to how automotive finishes were utilized, with specific functions of an automotive coating now being directly engineered into each layer. In the late 1970s, the first wet-on-wet systems were developed that consisted of a thin base coat and a thicker clear coat. This separation of coating function now allowed for completely different chemistries to be employed between layers. Based on solvents composed of glycol ethers and water, these systems dramatically reduced hydrocarbon emissions and were generally high solid in nature, easily meeting EPA requirements.
POLYURETHANES Modern automotive coatings overcome these limitations by using a hybrid dispersion of acrylics, polyurethane and even polyesters. These systems, known as acrylic-polyurethane enamels, incorporate the monomers of each resin in a proprietary combination that, once initiated by a catalyst, undergo polymerization. By adjusting the constituent resins and their quantities as well as the catalyst formulation, the sequence and rate of how this polymer network is formed can be modified, and the properties of the composite film adjusted to suit the needs of the product.
The GIF format was introduced in 1987 by CompuServe, America's first major commercial online service provider. In the early 1980s, advances in modem speed, processing power, and the introduction of the CompuServe B file transfer protocol had allowed for the exchange of graphics on the platform. This also opened the door to CompuServe's eventual transition to a GUI-based interface.
At the time, access to most online information services was billed by time, and for graphics to be exchanged cost effectively and within a practical transfer time, the service required a method to reduce the memory requirements of informationally dense graphical data.
PALETTES Because of this, the concept of a palette, or color lookup table, was introduced. A palette is a data structure that contains an indexable lookup table of chromaticity coordinates. In this mechanism, each pixel of the image data is defined by a palette table index, where its color data can be found. A 2-bit-per-pixel image, for example, can reference 4 color definitions within a palette, while a 16-bit-per-pixel image can reference a little over 65k unique color definitions.
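The palette mechanism amounts to a simple table lookup. A minimal sketch, using a hypothetical 2-bit, 4-color palette of our own choosing:

```python
# A 2-bit-per-pixel image can reference 4 color definitions. Each pixel
# stores only a small palette index; the full RGB definition lives once
# in the lookup table, rather than being repeated per pixel.
palette = [
    (0, 0, 0),        # index 0: black
    (255, 255, 255),  # index 1: white
    (255, 0, 0),      # index 2: red
    (0, 0, 255),      # index 3: blue
]

# 2-bit pixel data for a tiny 4-pixel image: each value is a palette index.
pixels = [0, 1, 2, 3]

# Resolving a pixel to its displayed color is a direct table lookup.
rgb_image = [palette[p] for p in pixels]
assert rgb_image[2] == (255, 0, 0)
```

Storing 2-bit indexes instead of full 24-bit color values is where the memory savings come from.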
IMAGE COMPRESSION This technique is known as lossy compression, as it alters the original image data in the compression process. While lossy compression can dramatically reduce memory requirements, the technique was far too processor intensive for consumer computer hardware of the time, and its lossiness made it unusable for functional graphics, such as in the case of graphical user interfaces. Lossless image compression that did not change the image data was chosen for the GIF format, as the available techniques were relatively simple and could operate easily on existing hardware. It also best matched the intended application of sharp-edged, line-art-based graphics that used a limited number of colors, such as logos and GUI elements.
RLE Run-length encoding allowed long runs of similar pixels to be compressed into one data element, and it proved to be most efficient on image data that contained simple graphics, such as icons and line drawings. For more complex images with few repeating runs, however, the overhead of the counting mechanism can require more memory than the original image, making the technique unusable for such content.
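A minimal run-length encoder along these lines; the exact schemes of the era differed in details, so this is only a sketch of the principle:

```python
def rle_encode(data):
    """Collapse runs of identical values into [count, value] pairs."""
    encoded = []
    for value in data:
        if encoded and encoded[-1][1] == value:
            encoded[-1][0] += 1          # extend the current run
        else:
            encoded.append([1, value])   # start a new run
    return encoded

# Long runs compress well: 7 symbols become 2 pairs.
assert rle_encode("AAAABBB") == [[4, "A"], [3, "B"]]

# Data with no runs doubles in size, showing the counting overhead.
assert rle_encode("ABC") == [[1, "A"], [1, "B"], [1, "C"]]
```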
LZW Wilhite had concluded that run-length encoding was not an effective solution and looked towards a new class of data-compression algorithms developed in the late 1970s by Jacob Ziv and Abraham Lempel. A key characteristic of LZW is that the dictionary is neither stored nor transmitted, but rather developed within the algorithm as the source data is encoded or the compressed data is decoded.
ENCODING In the encoding process an initial code-width size is established for the encoded data. An 8-Bit based data source for example would require the first 256 dictionary indexes to be mapped to each possible 8-bit word value.
From here, if more data is available in the source data stream the algorithm returns to its loop point. If there is no data left to encode, the contents of the remaining index buffer is found within the dictionary and its index code-word sent to the output code stream, completing the final encoding.
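The encoding loop described above can be sketched as follows. This follows the general LZW scheme with a fixed 8-bit initial dictionary, rather than GIF's exact variable-width variant:

```python
def lzw_encode(data: bytes) -> list:
    """LZW-compress a byte string into a list of dictionary index codes."""
    # Initialize the dictionary with every possible single-byte value,
    # so codes 0-255 map directly to the 256 possible 8-bit word values.
    dictionary = {bytes([i]): i for i in range(256)}
    next_code = 256
    buffer = b""
    output = []
    for byte in data:
        candidate = buffer + bytes([byte])
        if candidate in dictionary:
            # Keep extending the index buffer while it matches an entry.
            buffer = candidate
        else:
            # Emit the code for the longest match, add the new string to
            # the dictionary, and restart the buffer at the current byte.
            output.append(dictionary[buffer])
            dictionary[candidate] = next_code
            next_code += 1
            buffer = bytes([byte])
    if buffer:
        # Final flush: the remaining buffer is always in the dictionary.
        output.append(dictionary[buffer])
    return output

# "ABABABA" -> codes for A, B, then reused entries "AB" (256) and "ABA" (258).
assert lzw_encode(b"ABABABA") == [65, 66, 256, 258]
```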
DECODING A dictionary table matched to the bit-width specification of the encoded data is first initialized in a manner similar to the encoding process. Because the encoding process always starts with a single value, the first code-word read from the input code stream always references a single value within the dictionary, which is subsequently sent to the decoded output data stream. Each subsequent code-word is then looked up in the dictionary; if an entry is found, or the code-word represents a single value, the referenced values are sent to the decoded output data stream.
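A matching decoder sketch, again using a fixed 8-bit initial dictionary rather than GIF's variable-width scheme, including the one special case where a code-word references the dictionary entry still being built:

```python
def lzw_decode(codes: list) -> bytes:
    """Rebuild the original bytes from LZW index codes, reconstructing
    the dictionary on the fly exactly as the encoder built it."""
    dictionary = {i: bytes([i]) for i in range(256)}
    next_code = 256
    # The first code-word always references a single initialized value.
    previous = dictionary[codes[0]]
    output = bytearray(previous)
    for code in codes[1:]:
        if code in dictionary:
            entry = dictionary[code]
        else:
            # Special case: the code refers to the entry being built right
            # now, which must start and end with the previous string.
            entry = previous + previous[:1]
        output += entry
        # Mirror the encoder: previous string + first byte of this entry.
        dictionary[next_code] = previous + entry[:1]
        next_code += 1
        previous = entry
    return bytes(output)

# Decoding the codes from the encoding example recovers the original data.
assert lzw_decode([65, 66, 256, 258]) == b"ABABABA"
```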
VARIABLE BIT-DEPTH This requires the bit-width of a code-word to be, at minimum, one bit larger than that of the image data. This is accomplished by starting the encoding process with the assumption that the bit-width of a code-word will be one bit larger than the image data's bit-depth. This is the minimum needed to index every possible value of the image data plus the control codes.
IMAGE LAYOUT Each image is contained within a segment block that defines its size, location on the canvas, an optional local color palette and the LZW encoded image data along with its starting code-word bit-width size. Each image can either use its own local color palette or the global color palette, allowing for the use of far more simultaneous colors than a single 8-bit palette would allow. The LZW encoded data within the image block are packaged into linked data sub-blocks of 256 bytes or less.
GRAPHIC CONTROL EXTENSION The graphic control extension also took advantage of the format's ability to store multiple images with the introduction of basic animation.
While the mass use of strategic nuclear weapons is the ultimate terror of modern warfare, it represents the final stage of conflict escalation on the world stage. A more immediate threat comes from tactical nuclear weapons. Tactical nuclear weapons are generally considered low-yield, short-ranged weapons designed for use at the theater level, alongside conventional forces. Both the US and Russia define a tactical nuclear weapon by its operational range.
Their battlefield centric missions and perception as being less destructive encourage their forward-basing and can make the decision to use tactical nuclear weapons psychologically and operationally easier, potentially pushing a conflict into the realm of strategic nuclear escalation. Surveillance systems designed to detect these detonations must home in on the telling characteristics of a nuclear weapon.
BHANGMETER As the very first nuclear weapon was detonated, it was observed by both cameras and other optical instrumentation that a peculiar double-peaked illumination curve of light was emitted from the bomb. Analysis of the fireball expansion phenomenon soon determined that two millisecond-range peaks of light were separated by a period of minimum intensity, lasting from a fraction of a second to a few seconds, that corresponded to an atmospheric shockwave breaking away from the expanding front of the fireball. The time it took for the shockwave front to transition from opaque to transparent was directly correlated to the weapon's yield.
FIRST METERS In 1948, during the third series of American nuclear testing, called Operation Sandstone, the first purpose-built proof-of-concept device for specifically detecting nuclear detonations would be tested. While this device was simple and devised on site, it provided a measurement of light intensity over time using a photocell coupled to a cheap oscilloscope. During a meeting with the project group, physicist Frederick Reines would coin the term "bhangmeter" for the device.
A calibration curve was developed from the average of these measurement devices and the testing weapon's yield. From this data, the bhangmeter was able to optically determine a nuclear weapon's yield to within 15%. Though blue light was used to produce this initial calibration data due to its higher contrast within the detonation, it was soon discovered that changing the observed spectrum of visible light also modified the amount of time it took for the light intensity to start its initial drop-off. During further tests it was also realized that the altitude of a bomb's detonation could be determined from analyzing the time-to-minimum light intensity, as the duration of the initial fireball expansion was largely influenced by the effect the ground had on its shape.
ADOPTION These aviation compatible, AC powered systems were specifically designed and deployed to monitor the Soviet test of Tsar Bomba, the largest nuclear weapon ever detonated. Around the same time, the first large scale nuclear detonation network would be deployed by the US and the UK. Linked by Western Union’s telegraph and telephone lines, the system was designed to report the confirmation of a nuclear double-flash before the sensors were destroyed by the detonation. The Bomb Alarm Display System was in use from 1961 to 1967 and while it offered adequate surveillance for the onset of nuclear war, the emergence of the Partial Nuclear Test Ban Treaty in 1963 now warranted the ability to monitor atmospheric nuclear testing at the global level.
SATELLITES The solution to the challenge of this new scope of nuclear detection came with Project Vela, a group of satellites developed specifically for monitoring test ban compliance. They could determine the location of a nuclear explosion to within about 3,000 miles, exceeding the positional and yield accuracy of the original system.
GPS As the Vela program was being phased out in the mid 1980s, the task of specifically detecting nuclear detonations would become a part of the new global positioning system. Known as the GPS Nuclear Detonation Detection System, this capability took advantage of the extensive coverage of earth's surface offered by the constellation.
These bursts propagate from a nuclear detonation in a spherical shell, and by measuring their intensity against the accurate timing information of 4 or more satellites of the GPS constellation, the time differences of arrival can be used to calculate the position of the x-ray burst source. Each of the GPS satellites is equipped with a specialized antenna and support system to both detect and measure these EMP incidences. The bhangmeters that complement the other sensors on the GPS constellation are the most sophisticated satellite-based system to date.
The Science Of Cardboard | New Mind | 2022-03-03
In 2020, the United States hit a record high in its yearly use of one of the most ubiquitous manufactured materials on earth, cardboard. As of 2020, just under 97% of all expended corrugated packaging is recovered for recycling, making this inexpensive, durable material an extraordinary recycling success story.
THE RISE OF PAPER PACKAGING This processed pulp is then used to produce paper. Paper making machines use a moving woven mesh to create a continuous paper web that aligns the fibers held in the pulp, producing a continuously moving wet mat of fiber. The invention of several paper-based packaging forms and processes stemmed from this boom, with the corrugated fiberboard shipping container quickly becoming the most dominant.
INVENTION OF CORRUGATION The first known widespread use of corrugated paper was in the 1850s, with an English patent being issued in 1856 to Edward Charles Healey and Edward Ellis Allen. In 1871, Albert Jones of New York patented the use of corrugated paper as a packaging material, and three years later, Oliver Long would patent an improvement on Jones's design with the addition of an adhered single paper facing to prevent the unfolding of the corrugation, forming the basis for modern corrugated fiberboard. American Robert Gair, a Brooklyn printer and paper-bag maker, had discovered that by cutting and creasing cardboard in one operation he could make prefabricated cartons.
In a partnership with the Thompson and Norris company, the concept would be applied to double-faced corrugated stock, giving rise to the production of the first corrugated fiberboard boxes. In 1903, the first use of corrugated fiberboard boxes for rail transport occurred when the Kellogg brothers secured an exception to the wooden box requirement by railroads of the Central Freight Association.
HOW ITS MADE Rolls of paper stock are first mounted onto unwinding stands and are pulled into the machine at the feeding side of the corrugator, also known as the "wet end". The paper medium is heated to around 176-193 degrees C , so it can be formed into a fluted pattern at the corrugating rolls. The corrugating rolls are gear-like cylinders that are designed to shape the paper medium into a fluted structure as it moves through them. As the newly formed fluted paper leaves these rolls, an adhesive is applied to the flute tips and the first liner is roller pressed on.
The paper stock that forms this liner is often pre-treated with steam and heat before this binding process. The adhesives used in modern corrugated fiberboard are typically water-based, food-grade, corn starches combined with additives. A second liner is applied by adding adhesive to the fluted tips on the other side of the paper medium. After curing, the sheets may be coated with molten wax to create a water-resistant barrier if the packaging is expected to be exposed to excessive amounts of moisture, such as with produce or frozen food products.
PAPER SOURCE While the first packaging papers relied on the chemical-based Kraft pulping process, modern production relies primarily on mechanical pulping, due to its lower cost and higher yield. When a production run of corrugated fiberboard is done, a target set of specifications based on customer requirements determines both the quality control and physical properties of the fiberboard.
BOXES Corrugated sheets are run through a splitter-scoring machine that scores and trims the corrugated stock into sheets known as box blanks. Within the flexographic machine, the final packaging product is created. Flexographic machines employ both printing dies and rotary die-cutters on a flexible sheet that are fitted to large rollers. Additionally, a machine known as a curtain coater is also utilized to apply a coat of wax for moisture-resistant packaging.
RECYCLING The slurry is sent through an industrial magnet to remove metal contaminants. Chemicals are also applied to decolorize the mixture of inks within the slurry. Because the paper produced by purely recycled material will have a dull finish and poor wear characteristics, virgin pulp is typically blended into the slurry to improve its quality. This blended pulp is then directly used to produce new paper.
Recycling paper based packaging is so effective that only 75% of the energy used to produce virgin paper packaging is needed to make new cardboard from recycled stock. Aside from diverting waste material from landfills, it requires both 50% less electricity and 90% less water to produce.
KEY FOOTAGE Georgia-Pacific Corrugated Boxes: How It’s Made Step By Step Process | Georgia-Pacific https://youtu.be/C5nNUPNvWAw
Portable Nuclear Power | New Mind | 2021-11-22
NUCLEAR POWER Of all the power sources available to man, none has been as extraordinary in energy yield as nuclear fission. In fact, a single gram of fissile nuclear fuel, in theory, contains as much free energy as the gasoline contained within a small fuel tanker truck. By the early 2000s, concerns over carbon dioxide emissions would bring about a renewed interest in nuclear power. And with this, came a myriad of developments that aimed at improving the safety and sustainability of large scale reactors. However, in recent years, a new paradigm in how nuclear fission reactors are created and utilized is starting to gain momentum.
NUCLEAR FISSION To date, almost all nuclear power reactors extract energy from the process of nuclear fission. In this process, a fissile nuclear fuel is bombarded with neutrons. As the nucleus of the fuel's atoms captures a neutron by the strong nuclear force, it begins to deform, resulting in the nucleus fragments exceeding the distances at which the strong nuclear force can hold the two groups of charged nucleons together. This tug of war between the strong nuclear force and the electromagnetic force ends with the two fragments separating by their repulsive charge.
Because fission reactions are primarily driven by bombardment, establishing and regulating a sustained fission chain reaction becomes feasible through controlling the free neutron movement within a reactor. This characteristic allows for fission reactions to be "throttled", making it well suited for electric power generation.
FIRST REACTORS The first practical nuclear reactor was developed during the early 1950s by the U.S. Navy. Known as the S1W reactor, it would see its first deployment on the USS Nautilus in January 1954. The S1W was a relatively simple and compact design known as a pressurized water reactor. The fission chain reaction can also be throttled by introducing neutron absorbers into the reactor core.
IMPROVEMENTS ON REACTOR DESIGN Within a decade, the two circuit designs of pressurized water reactors would be reduced to a single loop configuration with the introduction of boiling water reactors. Designed primarily with civilian power generation in mind, a boiling water reactor directly produces steam by heating cooling water with the reactor core. This steam is then directly used to drive a turbine, after which it is cooled in a condenser and converted back to liquid water, and pumped back into the reactor core. Boiling water reactors still utilized water as the neutron moderator and chain reaction throttling via control rods or blades was also retained.
GAS REACTORS In gas cooled reactors, an inert gas is used to transfer heat from the reactor core to a heat exchanger, where steam is generated and sent to turbines. Neutron moderation is accomplished by encasing the nuclear fuel in either graphite or heavy water. The effectiveness of how they moderate neutrons also permits the use of less-enriched uranium, with some reactors being able to operate purely on natural uranium.
PEBBLE-BED REACTORS These thin, solid layers are composed of a 10-micron porous inner carbon layer that contains the fission reaction products; a neutron-moderating, protective 40-micron pyrolytic carbon inner layer; a 35-micron silicon carbide ceramic layer to contain high-temperature fission products and add structure to the particle; and another protective 40-micron pyrolytic carbon outer layer. TRISO fuel is incredibly robust and resilient. It can survive extreme thermal swings without cracking, as well as the high pressures and temperatures of fission cooling systems.
Gas cooled reactors work especially well with TRISO fuel because of their ability to operate at high temperatures while remaining chemically inert. When combined with TRISO fuel, they also offer incomparable levels of nuclear containment.
SMRs SMRs are nuclear reactors of relatively small power generation capacity, generally no larger than 300 MW. They can be installed in multi-reactor banks to increase plant capacity and they offer the benefit of lower investment costs and increased safety through containment.
PROJECT PEELE
Called Project Peele, the program is planned around a two year design-maturation period where a generation IV reactor design will be adapted to small scale, mobile use. X-Energy, in particular, has promoted TRISO pebble bed technology as the ideal choice for such a rugged reactor design.
In addition, the full-scale deployment of Fourth Generation nuclear reactor technologies will have significant geopolitical implications for the United States while reducing the Nation’s carbon emissions.
SOCIAL MEDIA LINKS Instagram - https://www.instagram.com/newmindchannel
The Modem: Building The Internet With Sound | New Mind | 2021-10-12
THE INTERNET ARPANET was initially created to facilitate communications among government agencies and research institutions. The civilian ARPANET would eventually be migrated to a more modernized parallel network called NSFNET. Around this time, the restrictions on the commercial use of NSFNET would be lifted and with it came the emergence of the commercial internet service provider industry.
This shift to commercialization became the catalyst for a massive influx of money, technical advancement, and the proliferation of access that transitioned the early internet from the military’s technological marvel to the massive communications apparatus that infiltrates every aspect of our lives today.
BAUD RATE The baud unit's definition would be formally revised in 1926 to represent the number of distinct symbol changes made to a transmission medium per second.
THE FIRST MODEMS Derived from the term modulator-demodulator, a modem converts digital data into a signal that is suitable for a transmission medium. A year later, a commercial variant of the SAGE modem would be introduced to the public as the Bell 101 Dataset.
FSK In 1962, the underlying technology of the modem would split from that of teleprinters with the introduction of the Bell 103 dataset by AT&T. Because the Bell 103 was now fully electronic, a new modulation method was introduced that was based on audio frequency-shift keying to encode data. In frequency shift keying a specific transmitted frequency is used to denote a binary state of the transmission medium.
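The idea can be sketched by generating the two tones directly. The 1270/1070 Hz pair matches the Bell 103 originate channel, while the sample rate and samples-per-bit values here are illustrative choices, not Bell 103 parameters:

```python
import math

# Frequency-shift keying: each bit selects one of two audio tones.
MARK_HZ, SPACE_HZ = 1270, 1070   # binary 1 (mark) and binary 0 (space)
SAMPLE_RATE = 8000               # illustrative audio sample rate
SAMPLES_PER_BIT = 80             # illustrative symbol duration

def fsk_modulate(bits):
    """Return audio samples encoding each bit as a burst of one tone."""
    samples = []
    for bit in bits:
        freq = MARK_HZ if bit else SPACE_HZ
        for n in range(SAMPLES_PER_BIT):
            samples.append(math.sin(2 * math.pi * freq * n / SAMPLE_RATE))
    return samples

# Three bits produce three equal-length tone bursts.
wave = fsk_modulate([1, 0, 1])
assert len(wave) == 3 * SAMPLES_PER_BIT
```

A receiver recovers the data by detecting which of the two frequencies dominates during each bit period.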
By the mid 1970s, the baud rate of frequency shift keying modems would be pushed even higher with the introduction of 600 baud modems that could operate at 1200 baud when used in one-directional communication, or half-duplex mode.
HAYES SMARTMODEM The Smartmodem introduced a command language which allowed the computer to make control requests that included telephony commands, over the same interface used for the data connection.
The mechanics allowed the modem to switch between command mode and data mode by transmitting an escape sequence of three plus symbols (+++). From this, the Hayes Smartmodem quickly grew in popularity during the mid 1980s, inherently making the command set used by it, the Hayes command set, the de facto standard of modem control.
QAM As the early 1980s progressed, manufacturers started to push their modem speeds past 1200 bps. In 1984, a new form of modulation called quadrature amplitude modulation would be introduced to the market. Quadrature amplitude modulation is an extension of phase shift keying that adds additional symbol encoding density per baud unit by overlapping amplitude levels with phase states. The first modem standard to implement quadrature amplitude modulation was ITU V.22bis, which employed a variation of the modulation known as 16-QAM to encode 16 different symbols, or 4 bits of data, within each baud unit, using a combination of 3 amplitude levels and 12 phases.
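The symbol-to-bit relationship works out with quick arithmetic; the 600 baud figure here is an assumption about the V.22bis symbol rate used purely to illustrate the calculation:

```python
import math

# Each QAM symbol encodes log2(symbol_count) bits of data per baud unit.
symbol_count = 16                          # 16-QAM: 16 distinct symbols
bits_per_symbol = int(math.log2(symbol_count))
assert bits_per_symbol == 4                # 4 bits per baud unit

# Bitrate is the symbol (baud) rate times the bits carried per symbol.
baud_rate = 600                            # assumed V.22bis symbol rate
bitrate = baud_rate * bits_per_symbol
assert bitrate == 2400                     # bits per second
```

This is why bitrate and baud rate are not interchangeable terms: denser constellations raise the bitrate without raising the symbol rate.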
TRELLIS Trellis-coded modulation differs dramatically from previous modulation techniques in that it does not transmit data directly. A state-machine-based algorithm is instead used to encode data into a stream of possible transitions between branches of the partition set. This transition data is used to recreate all possible branch transitions in a topology that is similar to a trellis. From this, using a predetermined rule for path selection, the most likely branch transition path is chosen and used to recreate the transmitted data.
HIGH SPEED MODEMS By 1994, baud rates would be increased to 3,429 symbols per second with up to 10 bits per symbol encoding now becoming possible. The dramatic boost in data rates created by TCM directly changed the look and feel of the growing internet.
56K
In early 1997, the modem would get one last boost in bitrate with the introduction of the first 56k dial-up modems. Pushing speeds above 33.6 kbps proved to be extraordinarily challenging, as the process that digitized telephone audio signals for routing by telecommunications infrastructure made it very difficult for denser data transmissions to survive the digitizing process. This difficulty led modem manufacturers to abandon pushing analog-end bitrate speeds higher. Initially there were two competing standards for 56k technology, US Robotics' X2 modem and the K56Flex modem developed by Lucent Technologies and Motorola.
Both competing products began to permeate the market at the beginning of 1997, and by October nearly 50% of all ISPs within the United States supported some form of 56k technology. V.90 merged the two competing designs into an entirely new standard that would receive strong industry support.
SOCIAL MEDIA LINKS Instagram - https://www.instagram.com/newmindchannel
THE TRUE COST OF BITCOIN (New Mind, 2021-03-13)
The power consumption of the Bitcoin network is a direct result of the mechanism by which it establishes trust among its participants. Bitcoin was developed to create a decentralized electronic currency system, and since the direct transfer of assets is not plausible electronically, information itself becomes the store of value. But unlike the traditional concept of a commodity, there is no actual definition of an object that represents a Bitcoin. Rather, the network operates on a ledger that is accepted by all participants.
This ledger contains the entire transaction history of the Bitcoin network, representing the changes of ownership of amounts of a definitionless entity called Bitcoin, since its genesis in 2009. This shared ledger is maintained by thousands of computers worldwide, called nodes, operating on a peer-to-peer network. Each node keeps a separate copy of the entire ledger, and combined they form the public, permissionless voting system that validates every transaction. When a transaction occurs, the sender’s balance from a previous Bitcoin transaction is transferred to one or more recipients, each identified by the public half of an asymmetric cryptographic key pair.
Once a transaction is created, it’s sent to the closest node, where it is subsequently distributed throughout the network and gathered by mining nodes into candidate blocks. A special transaction, known as a block reward, is also added to the block as an incentive mechanism for miners to build upon the network by block creation.
Each mining node can independently choose which transactions will appear in a new block, but only one can earn the authority to add its block to the chain of existing blocks that every participant on the network accepts as the Bitcoin blockchain. That authority is earned by finding a hash of the candidate block that meets the network’s difficulty requirement; finding this hash is called proof of work or PoW. Once a valid hash is found, the new block is broadcast to the rest of the network, where it is validated and added to each node’s copy of the blockchain.
As of February 2021, it takes roughly 90 sextillion hash guesses to create a valid bitcoin block. This dramatic rise in the needed computational power is a direct result of an inbuilt mechanism of the bitcoin network that raises or lowers the leading-zero count requirement of a block hash in order to keep the average creation time of a new block to around 10 minutes.
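The hash search described above can be sketched as a toy proof-of-work loop. This is an illustrative sketch only: real Bitcoin mining double-SHA-256 hashes an 80-byte block header and compares the result against a numeric target, and the block string and difficulty here are hypothetical.

```python
import hashlib

def mine(block_data: str, difficulty: int) -> tuple[int, str]:
    """Increment a nonce until the SHA-256 hash of the block data
    begins with `difficulty` leading zero hex digits."""
    target = "0" * difficulty
    nonce = 0
    while True:
        digest = hashlib.sha256(f"{block_data}{nonce}".encode()).hexdigest()
        if digest.startswith(target):
            return nonce, digest
        nonce += 1

# Raising the difficulty by one hex digit multiplies the expected number of
# guesses by 16 -- the same lever the real network uses to hold average
# block creation time near 10 minutes.
nonce, digest = mine("toy block header", 4)
print(nonce, digest)
```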
At its inception, each bitcoin block created rewarded 50 bitcoin. As of February 2021, one block reward is worth 6.25 bitcoin.
POWER CONSUMPTION
As the value of bitcoin rises, miners collectively throw more computing power at the network to capitalize on the higher prices. This inherently forces the network difficulty to increase, and eventually an equilibrium is reached between the profitability of mining and network difficulty. Within this feedback loop that regulates the network lies a key link between bitcoin and a real-world resource: power consumption.
As of February 2021, the total network hash rate has hovered around one hundred fifty quintillion block hashes calculated every second globally; in other words, the total hash rate of the network can be said to be 150 million TH/s. Because these devices are the most efficient miners on the network, this sets the theoretical lower limit of energy consumption at 4.5 gigajoules per second, or about 40 terawatt-hours per year if the current total hash rate is maintained. This approach assumes that all mining participants in the network aim to make a profit, and that the value of all new Bitcoins produced by mining must, on average, at least cover the operating costs of mining.
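The arithmetic behind those figures is straightforward. The 30 J/TH miner efficiency below is an assumption implied by the text's numbers (4.5 GJ/s at 150 million TH/s), not a value stated directly:

```python
# Assumed figures: network hash rate from the text; miner efficiency in
# joules per terahash is a hypothetical best-case value implied by the
# text's 4.5 GJ/s total.
network_hashrate_th_s = 150e6   # 150 million TH/s
efficiency_j_per_th = 30        # assumption: ~30 J/TH for the best miners

power_watts = network_hashrate_th_s * efficiency_j_per_th  # watts = J/s
seconds_per_year = 365 * 24 * 3600
energy_twh_per_year = power_watts * seconds_per_year / 3.6e15  # 1 TWh = 3.6e15 J

print(power_watts / 1e9)      # gigawatts, i.e. gigajoules per second
print(energy_twh_per_year)    # terawatt-hours per year
```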
SCALE OF POWER
Even the most conservative estimate of the network’s power consumption would rank it as high as 56th among the most power-consuming countries in the world, on par with New Zealand. At the current best estimate of the network’s power consumption levels, each bitcoin transaction takes around 700 kilowatt-hours to process.
Further compounding bitcoin's power consumption issues is the fact that mining hardware must run continuously to be effective. This makes it difficult to employ excess power generation strategically for mining use, effectively making mining consumption a baseline power demand on infrastructure.
In fact, it’s estimated that China single-handedly operates almost 50% of the bitcoin network, with the nation of Georgia following in second with a little over 25% of all mining, and the US in 3rd place with 11.5%. The annual carbon footprint of the bitcoin network is estimated to be around 37 Mt of carbon dioxide.
FUTURE
Alternative consensus mechanisms, like proof of stake (PoS), have been developed to address the power consumption associated with proof of work.
Many experts still warn that on its current growth trajectory, it is simply unsustainable for bitcoin to become a global reserve currency, as this would require the network to consume a significant portion of all energy produced globally.
The millennia-old idea of expressing signals and data as a series of discrete states had ignited a revolution in the semiconductor industry during the second half of the 20th century. This new information age thrived on the robust and rapidly evolving field of digital electronics. The abundance of automation and tooling made it relatively manageable to scale designs in complexity and performance as demand grew. However, the power consumed by AI and machine learning applications cannot feasibly continue to grow as it has on existing processing architectures.
THE MAC In a digital neural network implementation, the weights and input data are stored in system memory and must be fetched and stored continuously through the sea of multiply-accumulate operations within the network. This approach results in most of the power being dissipated in fetching and storing model parameters and input data to the arithmetic logic unit of the CPU, where the actual multiply-accumulate operation takes place. Moving the data for a typical multiply-accumulate operation within a general-purpose CPU consumes more than two orders of magnitude more energy than the computation itself.
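A minimal sketch of the multiply-accumulate pattern described above, written as a plain fully connected layer. Every `x * w` term is one MAC, and every weight and input it touches must first be fetched from memory:

```python
def dense_layer(inputs, weights, biases):
    """Fully connected layer: each output neuron is a chain of
    multiply-accumulate (MAC) operations over all inputs."""
    outputs = []
    for w_row, b in zip(weights, biases):
        acc = b
        for x, w in zip(inputs, w_row):
            acc += x * w  # one multiply-accumulate operation
        outputs.append(acc)
    return outputs

# Two inputs feeding two output neurons: four MACs in total.
print(dense_layer([1.0, 2.0], [[0.5, -1.0], [2.0, 0.25]], [0.0, 1.0]))
# -> [-1.5, 3.5]
```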
GPUs Their ability to process 3D graphics requires a large number of arithmetic logic units coupled to high-speed memory interfaces. This characteristic inherently made them far more efficient and faster for machine learning by allowing hundreds of multiply-accumulate operations to be processed simultaneously. GPUs tend to utilize floating-point arithmetic, using 32 bits to represent a number by its mantissa, exponent, and sign. Because of this, GPU-targeted machine learning applications have been forced to use floating-point numbers.
ASICS These dedicated AI chips offer dramatically larger amounts of data movement per joule when compared to GPUs and general-purpose CPUs. This came as a result of the discovery that with certain types of neural networks, a dramatic reduction in computational precision only reduced network accuracy by a small amount. However, it will soon become infeasible to increase the number of multiply-accumulate units integrated onto a chip, or to reduce bit-precision further.
LOW POWER AI
Outside the realm of the digital world, it’s known definitively that extraordinarily dense neural networks can operate efficiently with small amounts of power.
Much of the industry believes that the digital aspect of current systems will need to be augmented with a more analog approach in order to take machine learning efficiency further. With analog, computation does not occur in clocked stages of moving data, but rather exploits the inherent properties of a signal and how it interacts with a circuit, combining memory, logic, and computation into a single entity that can operate efficiently in a massively parallel manner. Some companies are beginning to examine returning to the long outdated technology of analog computing to tackle the challenge. Analog computing attempts to manipulate small electrical currents via common analog circuit building blocks to do math.
These signals can be mixed and compared, replicating the behavior of their digital counterparts. However, while large scale analog computing has been explored for decades for various potential applications, it has never been successfully executed as a commercial solution. Currently, the most promising approach to the problem is to integrate programmable analog computing elements into large arrays that are similar in principle to digital memory. By configuring the cells in an array, an analog signal, synthesized by a digital to analog converter, is fed through the network.
As this signal flows through a network of pre-programmed resistors, the currents are added to produce a resultant analog signal, which can be converted back to digital value via an analog to digital converter. Using an analog system for machine learning does however introduce several issues. Analog systems are inherently limited in precision by the noise floor. Though, much like using lower bit-width digital systems, this becomes less of an issue for certain types of networks.
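The crossbar behavior described above can be approximated numerically. This is an idealized sketch: each column's output current is the sum of voltage-conductance products (Kirchhoff's current law), Gaussian noise stands in for the analog noise floor, and a crude uniform quantizer stands in for the ADC. All parameter values are illustrative assumptions, not from the text.

```python
import random

def crossbar_mac(voltages, conductances, noise_sigma=0.01, adc_levels=256):
    """Idealized analog crossbar: each column sums its I = V * G
    contributions, picks up additive noise, and is digitized by a
    uniform ADC."""
    currents = []
    for g_col in conductances:
        i = sum(v * g for v, g in zip(voltages, g_col))  # analog dot product
        i += random.gauss(0.0, noise_sigma)              # noise floor
        currents.append(i)
    # Quantize to adc_levels steps over an assumed symmetric full-scale range.
    full_scale = max(abs(c) for c in currents) or 1.0
    step = 2 * full_scale / (adc_levels - 1)
    return [round(c / step) * step for c in currents]

# Two input voltages driving two columns of programmed conductances.
print(crossbar_mac([1.0, 0.5], [[2.0, 1.0], [0.0, 4.0]], noise_sigma=0.0))
```

With the noise turned off, the outputs land within one ADC step of the exact dot products (2.5 and 2.0); raising `noise_sigma` shows why analog precision is bounded by the noise floor.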
If analog circuitry is used for inferencing, the result may not be deterministic and is more likely to be affected by heat, noise, or other external factors than a digital system. Another problem with analog machine learning is that of explainability. Unlike digital systems, analog systems offer no easy method to probe or debug the flow of information within them. Some in the industry propose that a solution may lie in the use of low precision, high speed analog processors for most situations, while funneling results that require higher confidence to lower speed, high precision, and easily interrogated digital systems.
During the Middle Ages, the concept of the perpetual motion machine would develop. The first law of thermodynamics, known as the Law of Conservation of Energy, would prohibit the existence of a perpetual motion machine by preventing the creation or destruction of energy within an isolated system.
MAXWELL’S DEMON
In 1867 James Clerk Maxwell, the Scottish pioneer of electromagnetism, conceived of a thermodynamic thought experiment that exhibited a key characteristic of a thermal perpetual motion machine. Because faster molecules are hotter, the being’s actions cause one chamber to warm up and the other to cool down, seemingly reversing the process of a heat engine without adding energy.
ENTROPY
Despite maintaining the conservation of energy, both Maxwell’s demon and thermal perpetual motion machines contravened arguably one of the most unrelenting principles of thermodynamics: the second law, which holds that the entropy of an isolated system can never decrease. This inherent, natural progression of entropy towards thermal equilibrium directly contradicts the behavior of all perpetual motion machines of the second kind.
BROWNIAN MOTION
In 1827, Scottish botanist Robert Brown, while studying the fertilization of flowering plants, began to investigate a persistent, rapid oscillatory motion of microscopic particles that were ejected by pollen grains suspended in water. Called Brownian motion, this phenomenon was initially attributed to thermal convection currents within the fluid. However, this explanation would soon be abandoned as it was observed that nearby particles exhibited uncorrelated motion. Furthermore, the motion was seemingly random and occurred in any direction.
These conclusions led Albert Einstein in 1905 to produce his own quantitative theory of Brownian motion. Within his work, Brownian motion had indirectly confirmed the existence of atoms of a definite size. Brownian motion would tie the concepts of thermodynamics to the macroscopic world.
BROWNIAN RATCHET In 1900, Gabriel Lippmann, inventor of the first color photography method, proposed an idea for a mechanical thermal perpetual motion machine, known as the Brownian ratchet. The device is imagined to be small enough that an impulse from a single molecular collision, caused by random Brownian motion, can turn the paddle. The net effect from the persistent random collisions would seemingly result in a continuous rotation of the ratchet mechanism in one direction, effectively allowing mechanical work to be extracted from Brownian motion.
BROWNIAN MOTOR During the 1990s, using Brownian motion to extract mechanical work would re-emerge in the field of Brownian motor research. Brownian motors are nanomachines that can extract useful work from chemical potentials and other microscopic nonequilibrium sources.
In recent years, they’ve become a focal point of nanoscience research, especially for directed-motion applications within nanorobotics.
ELECTRICAL BROWNIAN MOTION
In 1950, French physicist Léon Brillouin proposed an easily constructible electrical circuit analog to the Brownian ratchet. Much like the ratchet and pawl mechanism of the Brownian ratchet, the diode would in concept create a "one-way flow of energy", producing a direct current that could be used to perform work. However, much like the Brownian ratchet, the "one-way" mechanism once again fails when the entire device is at thermal equilibrium.
In early 2020, a team of physicists at the University of Arkansas would make a breakthrough in harvesting the energy of Brownian Motion. Instead of attempting to extract energy from a fluid, the team exploited the properties of a micro-sized sheet of freestanding graphene. At room temperature graphene is in constant motion. The individual atoms within the membrane exhibit Brownian motion, even in the presence of an applied bias voltage.
The team created a circuit that used two diodes to capture energy from charge flow created by the graphene’s motion. In this state, the graphene begins to develop a low-frequency oscillation that shifts the evenly distributed power spectrum of Brownian motion to lower frequencies. The diodes had actually amplified the power delivered, rather than reduce it, suggesting that electrical work was done by the motion of the graphene despite being held at a single temperature. Despite contradicting decades of philosophical analysis, the team behind this experiment concluded that while the circuit is at thermal equilibrium, the thermal exchange between the circuit and its surrounding environment is in fact powering the work on the load resistor.
Graphene power generation could be incorporated into semiconductor products, providing a clean, limitless, power source for small devices and sensors.
DESCRIPTION The story of technology is one of convergence. It is ideas applied, forged from multiple disciplines, all coinciding at the right place and at the right time.
This video is an account of a tiny sliver of that story, where a novel concept, born out of the explosion of discovery at the turn of the 20th century, would slowly gravitate towards a problem that lay a century away.
THE SCIENCE OF BOOST (New Mind, 2020-12-02)
By design, reciprocating engines are air pumps. They compress the aspirated air-fuel charge, ignite it, convert this expansion of hot gases into mechanical energy, and then expel the cooler, lower pressure gases. The amount of energy converted is determined by the pressure exerted on its pistons by combustion and the length of its expansion cycle. By increasing how aggressively a given mass of air-fuel charge is compressed, higher combustion pressures are achieved, allowing more energy to be extracted and thus creating more mechanical power output.
ROOTS SUPERCHARGER In 1859 two brothers Philander Higley Roots and Francis Marion Roots founded The Roots Blower Company in Connersville, Indiana.
Roots superchargers operate by pumping air with a pair of meshing lobes resembling a set of stretched gears. The incoming air is trapped in pockets surrounding the lobes and carried from the intake side to the exhaust of the blower.
TWIN-SCREW SUPERCHARGERS In 1935, Swedish engineer Alf Lysholm patented a new air pump design as well as a method for its manufacture that improved upon the limitations of Roots blowers. Lysholm had replaced the lobes with screws, creating the rotary-screw compressor.
CENTRIFUGAL SUPERCHARGERS...
INTERCOOLERS
Forcing more air into a cylinder with boost easily creates more power in an engine by increasing the air mass of the intake charge beyond what is possible with natural aspiration. This also inherently pushes volumetric efficiency well beyond 100%.
Because forced induction occurs outside of the engine, the properties of the air mass can be further enhanced by cooling, accomplished by passing the compressed air through a heat-exchange device known as an intercooler.
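Why cooling the charge helps follows from the ideal gas law: air density scales with pressure over temperature. A rough sketch, using illustrative pressure and temperature values that are assumptions rather than figures from the text:

```python
def charge_density_ratio(p_charge_kpa, t_charge_k,
                         p_ambient_kpa=101.325, t_ambient_k=298.0):
    """Ideal-gas estimate of intake charge density relative to ambient air:
    density is proportional to P / T."""
    return (p_charge_kpa / p_ambient_kpa) * (t_ambient_k / t_charge_k)

# Roughly 1 bar of boost (202.65 kPa absolute) with an intercooler cooling
# the compressed charge to 320 K packs about 1.86x as much air mass into
# each cylinder fill as natural aspiration would.
print(round(charge_density_ratio(202.65, 320.0), 2))
```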
TURBOCHARGERS
In some extreme cases, it can take as much as ⅓ of the base engine's power to drive the supercharger to produce a net gain in power.
The first turbocharger design was patented in 1905 by Swiss Engineer Alfred Büchi. He had conceptualized a compound radial engine with an exhaust-driven axial flow turbine and compressor mounted on a common shaft.
Turbochargers work by converting the heat and kinetic energy contained within engine exhaust gases, as they leave a cylinder. Radial inflow turbines work on a perpendicular gas flow stream, similar to a water wheel.
This shaft is housed within the center section of a turbocharger known as the center hub rotating assembly. Not only must it contain a bearing system to suspend the shaft spinning at 100,000s of RPMs, but it must also contend with the high temperatures created by exhaust gases.
In automotive applications, the bearing system found in most turbochargers is typically either a journal bearing or ball bearing system. Of the two, journal bearings are more common due to their lower cost and effectiveness. A journal bearing system consists of two types of plain bearings: cylindrical bearings to contain radial loads and a flat thrust bearing to manage thrust loads.
Turbine aspect ratio - This is the ratio of the area of the turbine inlet relative to the distance between the centroid of the inlet and the center of the turbine wheel.
Compressor Trim - This is the relationship between the compressor wheel’s inducer and exducer diameters.
WASTEGATES
To prevent pressures and speeds from exceeding safe limits, a mechanism called a wastegate is employed. Wastegates work by opening a valve at a predetermined compressor pressure that diverts exhaust gases away from the turbine, limiting its rpm. In its most common form, the wastegate is integrated directly into the turbine housing, employing a poppet type valve. The valve is opened by boost pressure pushing a diaphragm against a spring of a predetermined force rating, diverting exhaust gases away from the turbine.
BLOW OFF VALVES
On engines with throttles, such as gasoline engines, a sudden closing of the throttle plate with the turbine spinning at high speed causes a rapid reduction in airflow beyond the surge line of the compressor. A blow-off valve is used to prevent this.
MULTI-CHARGING
Twincharging started to appear in commercial automotive use during the 1980s, with Volkswagen being a major adopter of the technology. In its most common configuration, a supercharger would feed directly into a larger turbocharger.
TWIN-SCROLL TURBOCHARGER
Twin-scroll turbochargers have two exhaust gas inlets that feed two gas nozzles. One directs exhaust gases to the outer edge of the turbine blades, helping the turbocharger to spin faster, reducing lag, while the other directs gases to the inner surfaces of the turbine blades, improving the response of the turbocharger during higher flow conditions.
VARIABLE GEOMETRY Variable-geometry turbochargers are another example of turbocharger development. They generally work by allowing the effective aspect ratio of the turbocharger’s turbine to be altered as conditions change.
BORING THROUGH THE EARTH'S CRUST (New Mind, 2020-10-21)
Over the course of the 1960s into the 80s, several interdisciplinary geoscientific research projects such as the Upper Mantle Project, the Geodynamics Project, and the Deep Sea Drilling Project contributed significantly to a better understanding of the earth's structure and development. In the 1960s, several prominent research organizations such as the National Academy of Sciences and the Ministry of Geology of the USSR initiated exploratory programs that used deep drilling to study the internals of the Earth. The aim of the program was to develop a model of the Earth’s crust and upper mantle, as well as new methods for forecasting mineral deposits. It developed a fundamentally new technical approach to the study of the deep structure of the Earth’s crust and upper mantle, based on a combination of seismic depth-sensing, deep drilling data, and other geophysical and geochemical methods.
These studies resulted in technologies that advanced both super-deep drilling and geological logging, in boreholes over 10 km deep.
DRILLING
In cable-tool drilling each drop would transmit force through a series of heavy iron drilling columns known as strings, driving a variety of bits deep into the borehole. Rotary drills utilized a hollow drill stem, enabling broken rock debris to be circulated out of the borehole, along with mud, as the rotating drill bit cut deeper.
PROJECT MOHOLE
The project’s goal was to drill through the Earth’s crust to retrieve samples from the boundary between the earth's crust and the mantle, known as the Mohorovicic discontinuity or Moho. Planned as a multi-hole, three-phase project, it would ultimately achieve a drill depth of 183 meters beneath the Pacific seafloor, under 3.6 km of water. Despite Project Mohole’s failure in achieving its intended purpose, it did show that deep-ocean drilling was a viable means of obtaining geological samples.
USSR'S RESPONSE
The Kola Superdeep Borehole had a target depth set at 15,000 meters, and in 1979, it had surpassed the 9,583-meter vertical depth record held by the Bertha Rogers hole, a failed oil-exploratory hole drilled in Washita County, Oklahoma, in 1974. By 1984, the Soviets had reached a depth of over 12,000 meters. Drilling would later restart from 7,000 meters. Finally, in 1989, after grinding through crystalline rock for more than half its journey, the drill bit reached a final reported depth of 12,262 meters, the deepest artificial point on Earth.
Though this fell short of the projected 1990 goal of 13,500 meters, drilling efforts continued despite technical challenges. However, in 1992, the target of 15,000 meters was eventually deemed impossible after the temperature at the hole’s bottom, previously expected to reach only 100 degrees C, was measured at over 180 degrees C. Ultimately, the dissolution of the Soviet Union led to the abandonment of the borehole in 1995. The Kola Superdeep Borehole was surpassed, in length only, by the slant drilled Al Shaheen oil well in Qatar, which extended 12,289 meters, though with a horizontal reach of 10,902 meters.
HOW IT WAS DRILLED
A 215mm diameter bit was rotated by a downhole turbine that was powered by the hydraulic pressure of ground-level mud pumps. A downhole instrument, consisting of a generator, a pulsator, and a downhole measuring unit that measured navigation and geophysical parameters, was fitted to the drill. The pulsator converts the measured data into pressure pulses that propagate through the fluid barrel in the drilling tool and are received by pressure sensors at the surface. There, the signal received by the pressure sensors is sent to the receiving device, where it is amplified, filtered, and decoded for control and recording use.
The downhole instrument is powered by the generator, which uses the movement of flushing fluid as a power source.
WHAT WAS FOUND
Rock samples taken from the borehole exposed cycles of crust-building that brought igneous rock into the crust from the mantle below. Additionally, one of the primary objectives of the Kola well was to penetrate through the upper layer of granite into the underlying basaltic rock. Even more astonishing, was the discovery of a subterranean layer of marine deposits, almost 7,000 meters beneath the surface, that were dated at two billion years old, and contained the fossil traces of life from 24 different species of plankton.
Similar projects have taken place since the drilling of the Kola Superdeep Borehole. One such notable example was the German Continental Deep Drilling Program, which was carried out between 1987 and 1995, reaching a depth of over 9,000 meters and using one of the largest derricks in the world. From this, the drilling project San Andreas Fault Observatory at Depth, or SAFOD, was formed in 2002.
The rapid expansion of software from simple text-based tools to massively complex, feature-rich, highly visual products would dominate the mass-market computing world during the 1980s and 90s. And with this push, came a higher demand on processors to both efficiently utilize more memory and grow in computing power, all while keeping costs at consumer accessible levels.
RISE OF 32-BIT
During the mid-1980s, in response to the growing demands of software, the opening moves towards the mainstream adoption of 32-bit processor architecture would begin. While 32-bit architectures have existed in various forms as far back as 1948, particularly in mainframe use, at the desktop level only a few processors had full 32-bit capabilities. Among them was Motorola's 68020. Produced in speeds ranging from 12 MHz to 33 MHz, the 68020 had 32-bit internal and external data buses as well as a 32-bit address bus. Its arithmetic logic unit was also now natively 32-bit, allowing for single clock cycle 32-bit operations.
One year later, Intel would introduce its own true 32-bit processor family, the 80386. Not only did it offer a new set of 32-bit registers and a 32-bit internal architecture, but also built-in debugging capabilities as well as a far more powerful memory management unit, that addressed many of the criticisms of the 80286.
This allowed most of the instruction set to target either the newer 32-bit architecture or perform older 16-bit operations. With 32-bit architecture, the potential to directly address and manage roughly 4.3 GB of memory proved to be promising. This new scale of memory addressing capacity would develop into the predominant architecture of software for the next 15 years.
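The addressable-memory figure follows directly from the address width:

```python
# A 32-bit address bus can select 2^32 distinct byte addresses.
address_bits = 32
addressable_bytes = 2 ** address_bits

print(addressable_bytes)  # 4294967296 bytes: 4 GiB, roughly 4.29 decimal GB
```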
On top of this, protected mode can also be used in conjunction with a paging unit, combining segmentation and paging memory management. The ability of the 386 to disable segmentation by using one large segment effectively allowed it to have a flat memory model in protected mode. This flat memory model, combined with the power of virtual addressing and paging is arguably the most important feature change for the x86 processor family.
PIPELINING
CPUs designed around pipelining can also generally run at higher clock speeds due to the shorter delays through the simpler logic of each pipeline stage. The instruction data is usually passed in pipeline registers from one stage to the next, via control logic for each stage.
Data inconsistency that disrupts the flow of a pipeline is referred to as a data hazard. A control hazard occurs when a conditional branch instruction is still executing within the pipeline while new instructions from what may turn out to be the incorrect branch path are being loaded into it.
One common technique to handle data hazards is known as pipeline bubbling. Operand forwarding is another employed technique, in which data is passed through the pipeline directly before it’s even stored within the general CPU logic. In some processor pipelines, out-of-order execution is used to help reduce underutilization of the pipeline during data hazard events.
Control hazards are generally managed by attempting to choose the most likely path a conditional branch will take in order to avoid the need to reset the pipeline.
CACHING
In caching, a small amount of high-speed static memory is used to buffer access to a larger amount of lower-speed, but less expensive, dynamic memory.
A derived identifier, called a tag, that points to the memory region the block represents, amongst all possible mapped regions it can represent, is also stored within the cache block. While simple to implement, direct mapping creates an issue when two needed memory regions compete for the same mapped cache block.
When an instruction invokes memory access, the cache controller calculates the block set the address will reside in and the tag to look for within that set. If the block is found, and it is marked as valid, then the data requested is read from the cache. This is known as a cache hit and it is the ideal path of memory access due to its speed. If the address cannot be found within the cache then it must be fetched from slower system memory. This is known as a cache miss and it comes with a huge performance penalty as it can potentially stall an instruction cycle while a cache update is performed.
Writing data to a memory location introduces its own complication as the cache must now synchronize any changes made to it with system memory. The simplest policy is known as a write-through cache, where data written to the cache is immediately written to system memory. Another approach known as write-back or copy-back cache, tracks written blocks and only updates system memory when the block is evicted from the cache by replacement.
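The tag, index, hit, and miss mechanics described above can be condensed into a toy direct-mapped cache model. The block and cache sizes here are arbitrary illustrative choices:

```python
class DirectMappedCache:
    """Toy direct-mapped cache: the address's low-order block bits pick a
    cache slot (index); the remaining high bits form the tag stored
    alongside the slot to identify which memory region it holds."""

    def __init__(self, num_blocks=8, block_size=16):
        self.num_blocks = num_blocks
        self.block_size = block_size
        self.tags = [None] * num_blocks  # None marks an invalid slot
        self.hits = 0
        self.misses = 0

    def access(self, address: int) -> bool:
        block_number = address // self.block_size
        index = block_number % self.num_blocks
        tag = block_number // self.num_blocks
        if self.tags[index] == tag:   # valid slot with matching tag: hit
            self.hits += 1
            return True
        self.tags[index] = tag        # miss: fetch block, replace occupant
        self.misses += 1
        return False

cache = DirectMappedCache()
cache.access(0x00)  # miss (cold cache)
cache.access(0x04)  # hit: same 16-byte block as 0x00
cache.access(0x80)  # miss: maps to index 0 with a different tag, evicting it
print(cache.hits, cache.misses)  # 1 2
```

The last access shows the direct-mapping conflict the text mentions: two memory regions competing for the same cache slot evict each other even when the rest of the cache is empty.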
DESCRIPTION - Superalloys - They also possess excellent mechanical strength and resistance to thermal creep, the permanent deformation of a material under constant load at high temperatures. Additionally, they offer good surface stability and excellent resistance to oxidation. Superalloys achieve their high-temperature strength through an alloying process known as solid solution strengthening, in which solute atoms of similar size replace solvent atoms in their lattice positions while leaving the overall crystal structure relatively unchanged. The casting process is especially important in the production of heat-resistant superalloys such as those used in aircraft engine components.
- Aggregated Diamond Nanorods - Some materials resist this deformation and break very sharply, without plastic deformation, in what is called a brittle failure. The measure of a material’s resistance to deformation, particularly in a localized manner, is its hardness.
Diamonds have always been the standard for hardness, being the hardest material known to man. X-ray diffraction analysis had indicated that ADNRs are 0.3% denser than standard diamonds, giving rise to their superior hardness.
Testing performed on a traditional diamond with an ADNR tip produced a hardness value of 170 GPa. Still, it’s speculated that ADNR’s hardness on the Mohs scale could exceed 10, the rating of a diamond.
- Delo Monopox VE403728 - The way we utilize the properties of materials tends to occur in plain sight. An adhesive, by definition, is any non-metallic substance applied to one or both surfaces of two separate materials that binds them together and resists their separation. Sometimes referred to as glues or cement, adhesives are one of the earliest engineering materials used by man.
The lap shear strength is reported as the failure stress in the adhesive, which is determined by dividing the failing load by the bond area. For comparison, a single 6mm spot weld found on the chassis of most cars typically has a lap shear strength of 20 MPa.
This substance is estimated to have a shear strength of around 60 MPa, approaching the strength of a soldered copper joint.
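The arithmetic behind the lap shear figures above is simply stress equals load divided by bond area. As a minimal sketch, the snippet below inverts that definition to estimate the load a 60 MPa bond could carry; the 25 mm × 12.5 mm overlap area is a hypothetical example, not a value from the source:

```python
def lap_shear_strength(failing_load_n: float, bond_area_m2: float) -> float:
    """Return the lap shear strength in pascals (Pa = N / m^2)."""
    return failing_load_n / bond_area_m2

# Hypothetical single-lap joint: 25 mm x 12.5 mm overlap.
bond_area = 0.025 * 0.0125              # bond area in m^2
strength_pa = 60e6                      # 60 MPa, the figure cited above

# Invert the definition to find the load such a bond could carry before failing.
max_load_n = strength_pa * bond_area
print(f"Bond area: {bond_area * 1e6:.1f} mm^2")
print(f"Failing load: {max_load_n / 1000:.2f} kN")   # 18.75 kN
```

Roughly 19 kN, or about the weight of a 1.9-tonne car, held by a bond area smaller than a postage stamp.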
- B. A. M. - How easily two materials slide against each other is determined by their coefficient of friction, a dimensionless value describing the ratio between the force of friction between two objects and the force pressing them together. Most dry materials, sliding against themselves, have friction coefficients between 0.3 and 0.6. Aside from its hardness, BAM’s unique composition exhibits the lowest known coefficient of friction of any dry material, 0.04, and it can reach as low as 0.02 with water-glycol-based lubricants.
BAM is so slippery that a hypothetical 1 kg block coated in the material would start sliding down an incline of only about 2 degrees.
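The 2-degree claim follows from basic statics: a block on an incline begins to slide when the tangent of the incline angle exceeds the coefficient of friction, and the block's mass cancels out entirely. A minimal sketch:

```python
import math

def critical_angle_deg(mu: float) -> float:
    """Incline angle (degrees) at which a block just begins to slide.

    On an incline of angle theta, gravity's component along the slope is
    m*g*sin(theta) and the maximum static friction is mu*m*g*cos(theta).
    Sliding starts when sin(theta) > mu*cos(theta), i.e. tan(theta) > mu.
    The mass m cancels, so the "1 kg" in the example is irrelevant.
    """
    return math.degrees(math.atan(mu))

print(f"Typical dry material (mu = 0.5): {critical_angle_deg(0.5):.1f} deg")
print(f"BAM, dry (mu = 0.04):            {critical_angle_deg(0.04):.2f} deg")
print(f"BAM, lubricated (mu = 0.02):     {critical_angle_deg(0.02):.2f} deg")
```

For μ = 0.04 this gives about 2.3 degrees, in line with the roughly 2-degree incline cited above.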
- Upsalite - Similar to how the slipperiest material was discovered, the most absorbent material was also discovered accidentally, in 2013, by a group of nanotechnology researchers at Uppsala University. While pursuing more viable methods for drug delivery using porous calcium carbonate, the team accidentally created an entirely new material, one thought for more than 100 years to be impossible to make. This material, mesoporous magnesium carbonate, or Upsalite, is a non-toxic magnesium carbonate with an extremely porous surface, allowing it to absorb more moisture at low humidities than any other known material.
Each nanopore is less than 10 nanometers in diameter, which results in one gram of the material containing 26 trillion nanopores and makes it very reactive with its environment. This characteristic gives it incredible moisture absorption properties, allowing it to absorb more than 20 times as much moisture as fumed silica, a material commonly used for moisture control during the transport of moisture-sensitive goods.
- Chlorine Trifluoride - Chlorine trifluoride is a colorless, poisonous, corrosive, and extremely reactive gas. In fact, it is so reactive that it is considered the most flammable substance known. First prepared in 1930 by the German chemist Otto Ruff, it was created by the fluorination of chlorine and then separated by distillation.
Because chlorine trifluoride is such a strong oxidizing and fluorinating agent, it will react with most inorganic and organic materials, and will even initiate combustion in many non-flammable materials without an ignition source. Its oxidizing ability even surpasses that of oxygen, allowing it to react with oxide-containing materials normally considered incombustible. It has been reported to ignite glass, sand, asbestos, and other highly fire-retardant materials. It will also ignite the ashes of materials that have already been burned in oxygen.
SOCIAL MEDIA LINKS Instagram - https://www.instagram.com/newmindchannel

4K 60 FPS Footage Of The First Flying Machines 1890-1910 | New Mind | 2020-07-25
This is an AI-colored and upscaled compilation of footage from the early days of aviation, when dangerous, bizarre contraptions attempted to take to the sky long before an understanding of aerodynamics existed.
DESCRIPTION In 1894, the assassination of the French president Marie François Sadi Carnot by an Italian anarchist triggered a chain of events that would lead to some of the most remarkable breakthroughs in surgical medicine of the 20th century. Carnot ultimately succumbed to his knife wound due to the severing of his portal vein. At the time, surgeons had no technique that could successfully reconnect blood vessels. This left a lasting impression on a young French surgeon named Alexis Carrel, who would ultimately go on to develop new techniques for suturing blood vessels. Interest in head transplantation started early in modern surgery, though it would take Carrel’s breakthrough in the joining of blood vessels, or vascular anastomosis, to make the procedure feasible.
In 1908, Carrel and the American physiologist Dr. Charles Guthrie performed the first attempts at head transplantation using two dogs. They attached one dog’s head onto another dog’s neck, connecting the arteries in such a way that blood flowed first to the decapitated head and then to the recipient’s head. The decapitated head was without blood flow for about 20 minutes during the procedure, and while the transplanted head demonstrated aural, visual, and cutaneous reflex movements early after the procedure, its condition soon deteriorated and the animal was euthanized after a few hours.
Throughout the 1950s and 60s, advances in immunosuppressive drugs and organ transplantation techniques offered new tools and methods to overcome some of the challenges faced by previous head transplantation attempts. In 1965, Robert White, an American neurosurgeon, began his own controversial research. However, unlike Guthrie and Demikhov, who focused on head transplantation, White’s goal was to perform a transplant of an isolated brain. In order to accomplish this challenging feat, he developed new perfusion techniques that maintained blood flow to an isolated brain. White created vascular loops to preserve the blood vessels and blood flow between the internal jaw area and the internal carotid arteries of the donor dog. This arrangement was referred to as "auto-perfusion," in that it allowed the brain to be perfused by its own carotid system even after being severed at the second cervical vertebral body. Deep hypothermia was then induced on the isolated brain to reduce its function, and it was positioned between the jugular vein and carotid artery of the recipient dog and grafted to the cervical vasculature.
It would be 45 years before the next major breakthrough in head transplantation occurred. In 2015, using mice, the Chinese surgeon Xiao-Ping Ren improved upon the methods used by Robert White with a technique in which only one carotid artery and the opposite jugular vein were cut, allowing the remaining intact carotid artery and jugular vein to continuously perfuse the donor head throughout the procedure.
To date, all attempts at head transplantation have been primarily limited to connecting blood vessels. However, the recent development of "fusogens" and their use in the field of spinal anastomosis, or the joining of spinal nerves, has opened up a potential solution for fusing the nervous systems of the donor and the recipient during transplantation.

Around the same time as Ren’s research, the Italian neurosurgeon Sergio Canavero also put forth his own head transplantation protocol, one that not only addressed reconnecting a spinal cord but was specifically designed for human head transplantation. Canavero’s protocol is based on an acute, tightly controlled spinal cord transection, unlike what occurs during traumatic spinal cord injury or in simply severing the cord surgically. He postulates that a controlled transection will allow tissue integrity to be maintained and subsequent recovery and fusion to occur. His proposed technique claims to exploit a secondary pathway in the brain known as the "cortico-trunco-reticulo-propriospinal" pathway. This gray-matter system of intrinsic fibers forms a network of connections between spinal cord segments. When the primary corticospinal tract is injured, the severed corticospinal axons can form new connections via these propriospinal neurons.
One of the issues most overlooked by head transplant researchers is that of pain.
In 2015, Valery Spiridonov, a 33-year-old Russian computer scientist who suffers from a muscle-wasting disease, became the first volunteer for HEAVEN, the "head anastomosis venture" led by Canavero. However, soon after the announcement, he withdrew from the experiment.
Laser Weapons | New Mind | 2020-06-11
The concept of using light as a weapon has intrigued weapon designers for centuries. The first such system hypothesized was the Archimedes heat ray. In 1960, Maiman operated the first functioning laser at Hughes Research Laboratories in Malibu, California.
HISTORY
In civilian applications, lasers would soon grow in power. With the ability to focus kilowatts of energy onto a small point, their use in industrial welding and cutting expanded rapidly. Their initial military use, however, was more indirect, primarily for range finding, targeting, and ordnance guidance. The first use of lasers to damage targets directly was in laser blinding weapons.
Because relatively low energy levels could permanently blind combatants, their use led to the Protocol on Blinding Laser Weapons in 1995. Lower-powered systems intended to temporarily blind or disorient their targets, called dazzlers, are still in use today by both the military and law enforcement. Laser systems that directly use highly focused light as a ranged weapon to damage a target are part of a class of arms known as directed energy weapons, or DEWs.
TACTICAL LASERS
One of Boeing’s technology demonstrators consists of a modified "Avenger" air defense vehicle with a laser DEW in place of its missile launcher. As a laser source, this system uses a commercial 2 kW solid-state laser and has demonstrated its effectiveness against unmanned aerial vehicles as well as explosive devices on the ground. Another, more powerful, tactical development by Boeing is the Relocatable High Energy Laser System, or RHELS. Raytheon, meanwhile, has replaced the cannon of one of its gun-based air defense systems with an industrial fiber laser, successfully testing the concept against a variety of targets, including incoming mortar rounds.
This heat has to be transported out of the solid-state medium in order to avoid overheating and destroying the laser. Additionally, the non-uniform temperature distribution within the amplifier causes a higher-than-ideal divergence of the resulting laser beam, reducing the delivered energy per target area. Fiber lasers, in particular, are ideal for weapon use because the ends of the fiber itself form the laser resonator. One notable example has been Northrop Grumman’s Joint High Power Solid-State Laser program, which has produced beams in the 100 kW range.
STRATEGIC LASERS
Power levels of this magnitude are predominantly achieved by chemical lasers, a focal technology of all strategic military laser programs. Chemical lasers use a chemical reaction to create the beam. The reactants are fed continuously into the reaction chamber, forming a gas stream that functions as the light-amplifying medium of the laser. Because the gas stream is continuously produced while spent reactants are vented out of the laser, excess heat does not accumulate and the output power is not limited by the need for cooling.
The Advanced Tactical Laser, or ATL, and the Airborne Laser, or ABL, have been the two most notable chemical laser DEW programs in recent years. What makes both of these programs so unique is that they are the first aircraft-based laser DEWs. The ATL is a technology demonstrator built to evaluate the capabilities of a laser DEW for "ultra-precise" attacks against communication platforms and vehicles. Powered by a chemical oxygen iodine laser, or COIL, its beam is speculated to be capable of up to 300 kW.
Of all the laser DEW programs explored, the ABL system is arguably the most prominent and recognizable. Built around a Boeing 747 designated YAL-1, the ABL is also powered by a chemical oxygen iodine laser, though one large enough to produce a continuous output power well into the megawatt range. In addition to the incredible power of its main laser, the ABL also features an adaptive optics system capable of correcting the degrading influence of atmospheric turbulence on the laser beam. On March 15, 2007, the YAL-1 successfully fired its laser in flight, hitting its target, a modified NC-135E Big Crow test aircraft.
On February 11, 2010, now fitted with a more powerful laser, the system successfully destroyed a liquid-fueled ballistic missile in its boost phase in a test off the central California coast.
Laser defense systems such as the US Navy’s XN-1 LaWS, deployed on the USS Ponce, and the Israeli Iron Beam air defense system are being used experimentally against low-end asymmetric threats. Though these systems are modest compared to the promises of the multi-billion-dollar programs of years past, at a cost of less than one dollar per shot, the versatility of these smaller, less expensive laser DEWs may prove to be the future of the technology.
The Story Of Electric Vehicle Batteries | New Mind | 2020-05-26
The Tesla 2170 lithium-ion battery cell and other high-capacity lithium-ion cell technologies represent the first hopeful steps in transitioning society toward a new standard of practical and economical transportation via electric vehicles.
HOW BATTERIES WORK
The modern incarnation of the electrochemical battery is credited to the Italian scientist Alessandro Volta, who assembled the first battery in response to the misguided findings of his colleague, Luigi Galvani. Volta suspected that the electric current came from the two dissimilar metals and was merely transmitted through the frog’s tissue, not originating from it. In proving this, Volta developed the first electrochemical battery, known as a voltaic pile.
Individual cells can be combined into configurations that increase the total voltage, the current capacity, or both. Such a combination is known as a battery. In primary batteries, the electrodes become depleted as they release their positive or negative ions into the electrolyte, or the build-up of reaction products on the electrodes prevents the reaction from continuing. The result is a one-time-use battery.
In secondary batteries, the chemical reaction that occurred during discharge can be reversed.
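The arithmetic of combining cells into a battery, as described above, is straightforward: cells in series add their voltages, while parallel strings add their capacities. The sketch below uses a hypothetical 3.6 V, 5 Ah cell and a 100-series, 4-parallel arrangement; these numbers are illustrative, not from the source:

```python
def battery_pack(cell_voltage_v: float, cell_capacity_ah: float,
                 series: int, parallel: int) -> tuple[float, float, float]:
    """Combine identical cells: series raises voltage, parallel raises capacity.

    Returns (pack voltage in V, pack capacity in Ah, pack energy in kWh).
    """
    voltage = cell_voltage_v * series
    capacity = cell_capacity_ah * parallel
    energy_kwh = voltage * capacity / 1000.0
    return voltage, capacity, energy_kwh

# Hypothetical pack: 100 cells in series, 4 such strings in parallel.
v, ah, kwh = battery_pack(3.6, 5.0, series=100, parallel=4)
print(f"Pack: {v:.0f} V, {ah:.0f} Ah, {kwh:.1f} kWh")  # 360 V, 20 Ah, 7.2 kWh
```

This is why EV packs quote both a voltage (set by the series count) and an energy capacity (set by the total cell count).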
FIRST RECHARGEABLE BATTERY
In 1859, the French physicist Gaston Planté would invent the lead-acid battery, the first-ever battery that could be recharged. By the 1880s, the lead-acid battery would take on a more practical form with each cell consisting of interlaced plates of lead and lead dioxide.
In the early 1900s, the electric vehicle began to grow in popularity in the United States, after thriving in Europe for over 15 years. Within a few years, most electric vehicle manufacturers had ceased production.
NiMH
In the late 1960s, research began at the global communications company COMSAT on a relatively new battery chemistry called nickel-hydrogen. Designed specifically for use on satellites, probes, and other space vehicles, these batteries used hydrogen stored at up to 82 bar with a nickel oxide hydroxide cathode and a platinum-based catalyst anode, behaving similarly to a hydrogen fuel cell. The pressure of the hydrogen decreases as the cell is depleted, offering a reliable indicator of the battery’s charge.
Though nickel-hydrogen batteries offered only slightly better energy storage capacity than lead-acid batteries, their service life exceeded 15 years and their cycle durability exceeded 20,000 charge/discharge cycles. By the early 1980s, their use on space vehicles had become common. Over the next two decades, research into nickel-metal hydride cell technology was supported heavily by both Daimler-Benz and Volkswagen AG, resulting in a first generation of batteries achieving storage capacities similar to nickel-hydrogen, though with a fivefold increase in specific power. This breakthrough led to the first consumer-grade nickel-metal hydride batteries becoming commercially available in 1989.
REVIVAL OF ELECTRIC CARS
Almost 100 years after the first golden age of electric vehicles, a confluence of several factors reignited interest in them. This initiative intersected with the recent refinement of nickel-metal hydride battery technology, making practical electric vehicles a commercially viable option to pursue. By the late 1990s, mass-market electric vehicle production had started once again. Taking a risk-averse approach, many automakers developed all-electric models based on existing platforms in their lineup.
MODERN ELECTRIC CARS
Despite lithium-ion batteries becoming a viable option for electric vehicles, the second half of the 1990s into the mid-2000s was primarily dominated by the more risk-averse technology of hybrid-powered vehicles. And even these successful early models, such as the Toyota Prius, were generally still powered by nickel-metal hydride battery technology.
At the time, lithium-ion batteries were still relatively unproven for vehicle use and also cost more per kWh. Around 2010, the cathode material of lithium-ion cells would evolve once again with the advent of lithium nickel manganese cobalt oxide cathodes, or NMC. Curiously, Tesla is known for being the only manufacturer that does not use NMC cell technology, relying instead on the much older lithium nickel cobalt aluminum oxide cathode, or NCA.
COBALT
With the surge in consumer adoption of electric vehicles comes a rise in demand for the lithium-ion batteries that power them. While roughly half of the cobalt produced is currently used for batteries, the metal also has important uses in electronics, tooling, and superalloys like those used in jet turbines. More than half of the world’s cobalt comes from the Democratic Republic of the Congo. With no state regulation, cobalt mining in the region is also plagued with exploitative practices.
How N95 Masks Stop Viruses | New Mind | 2020-05-12
Preventing a pathogen from entering our respiratory system may, at first glance, seem like a simple problem. The first thought might be to trap pathogens by preventing particles from moving through a filter. But looking deeper at the problem reveals the true scope of the challenge.
With every normal breath, we inhale around half a liter of air. The pressure difference between the atmosphere and our lungs during inhalation peaks at around 8 cm of water. For comparison, a typical shop vac can pull a vacuum of around 200 cm of water, about 25 times that of our lungs.
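The comparison above can be checked with a quick unit conversion. A minimal sketch, assuming the conventional conversion factor of roughly 98.07 Pa per centimetre of water:

```python
CM_H2O_TO_PA = 98.0665  # pascals per centimetre of water (conventional value)

lung_peak_cm = 8     # peak inhalation pressure difference, per the text
shop_vac_cm = 200    # typical shop vac vacuum, per the text

print(f"Lungs:    {lung_peak_cm * CM_H2O_TO_PA:.0f} Pa")   # 785 Pa
print(f"Shop vac: {shop_vac_cm * CM_H2O_TO_PA:.0f} Pa")    # 19613 Pa
print(f"Ratio:    {shop_vac_cm / lung_peak_cm:.0f}x")      # 25x
```

In absolute terms our lungs generate well under 1% of atmospheric pressure (about 101,325 Pa), which is why a filter must present very little resistance to airflow.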
Pathogens vary widely in size, from bacteria, which generally range from 1-20 um, to viruses, which can range from 17 nm up to 750 nm. The rhinovirus that causes the common cold, for example, is around 30 nm in diameter, while HIV, SARS-CoV-2, and some strains of influenza hover around 120 nm.
TYPES OF RESPIRATORS
N95 respirators are part of a class of respiratory protection devices known as mechanical filter respirators. These mechanically stop particles from reaching the wearer's nose and mouth. Another form of respiratory protection is the chemical cartridge respirator, which is specifically designed to chemically remove harmful volatile organic compounds and other vapors from the breathing air. Both classes of respirators are available in powered configurations, known as powered air-purifying respirators.
N95
The N95 designation is a mechanical filter respirator standard set and certified by the National Institute for Occupational Safety and Health in the United States. The number designates the percentage of airborne particles removed, not their size. While ratings up to N100, which can filter 99.97% of airborne particles, exist, N95 respirators were determined in the 1990s to be suitable for short-term health care use.
Other designations include oil-resistant R and oil proof P respirators, which are designed to be more durable and maintain filter effectiveness against oily particles in industrial use. Surgical grade N95 respirators possessing fluid resistance were specifically cleared by the United States Food And Drug Administration for medical use.
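Since the N-number is a removal percentage, the fraction of particles that penetrate the filter media falls out directly. A small sketch (the N99 tier is a real NIOSH rating, included here for comparison):

```python
def penetration_pct(rating_pct: float) -> float:
    """Percentage of test particles that pass through the filter media."""
    return 100.0 - rating_pct

for name, rating in (("N95", 95.0), ("N99", 99.0), ("N100", 99.97)):
    print(f"{name}: removes {rating}% -> {penetration_pct(rating):.2f}% penetrate")
```

Framed as penetration, the jump from N95 to N100 is larger than it looks: 5% of particles pass an N95 filter versus 0.03% for an N100, more than a 160-fold difference.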
HOW THEY WORK
Modern mechanical filter respirators work not by ‘netting’ particles but by forcing them to navigate a high-surface-area maze of multiple layers of filter media. This allows large unobstructed paths for air to flow through while causing particles to attach to fibers through a number of different mechanisms.
In order to achieve the high surface area required, a non-woven fabric manufacturing process known as "melt blowing" is used to produce the filter media. In this technique, high-temperature, high-pressure air is blown across a molten polymer, typically polypropylene, as it is spun into fibers. This produces a tough yet flexible layer of material composed of small fibers. Depending on the specifications of the layer being produced, these fibers can range from 100 um down to about 0.8 um in diameter.
How these fibers capture particles is determined by the movement of air through the filter media. Air traveling around a fiber moves in streams, and the likelihood that a particle stays within its stream is primarily determined by its size.
The largest particles in the air tend to be slow-moving and predominantly settle out due to gravity. Particles that are too small for the effects of gravity, down to around 600 nm, are primarily captured by inertial impaction and interception.
Inertial impaction occurs with the larger particles in this size range: their momentum prevents them from following the air stream as it curves around a fiber, causing them to collide with it. Interception captures particles that remain within their stream but pass close enough to a fiber to touch it and adhere.
In contrast, particles below 100 nm are mainly captured through a mechanism known as diffusion. Random movements of air molecules cause these very small particles to wander across the air stream due to Brownian motion. Because the path taken through the filter is drawn out, the probability of capture through inertial impaction or interception increases dramatically, particularly at lower airflow velocities.
EFFICIENCY
Because of the complex, overlapping methods by which particle filtration occurs, the smallest particles are not the most difficult to filter. In fact, the point of lowest filter efficiency tends to occur where the complementing methods begin to transition into each other, around 50-500 nm. Particles in this range are too large to be effectively pushed around by diffusion and too small to be effectively captured by interception or inertial impaction. This also happens to be the size range of some of the more harmful viral pathogens. Interestingly, the more a respirator is worn, the more efficient it becomes.
FLAWS
The weakest point on any respirator is how well it seals against the face. Air will always pass through facial leaks because they offer much lower resistance than the respirator, carrying particles with it.
Paper was first created in China during the first century A.D. Initially made from cotton and linen, these fabric papers were expensive to produce and were generally reserved for permanent writing. Because of paper’s value, more trivial, temporary writing was done on reusable clay or wax tablets. In the 19th century, the industrial revolution brought about the invention of wood pulping and industrial paper mills, making paper production inexpensive and widely available.
FIRST PAPER CLIPS
By dividing the process of drawing, straightening, forming, and cutting iron into over a dozen individual tasks, each performed by a dedicated laborer, pin production became over 1,000 times more efficient. Where a single man could barely create 30 pins in a day, this early use of the assembly line easily yielded production rates of over 30,000 pins.
WIRE TO CLIPS
Advancements in both metallurgy and mechanization would finally bring about the marvel of modern paper-holding technology, the paper clip. The key to this shift from pins to clips occurred during the 1850s with the introduction of low-cost, industrially produced steel. During the last few decades of the 19th century, thousands of patents were issued for almost every shape of formed steel wire that could conceivably be used as a commercial product.
THE FIRST PAPER CLIPS
Among these early steel wire-based products were the first paper clips. The earliest known patent for a paper clip was awarded in the United States to Samuel B. Fay. Some of these designs, such as the bow-shaped Ideal paper clip and the two-eyed Owl clip, can still be found in use today. Many were created to address specific challenges of managing paperwork.
GEM PAPER CLIPS
Among them, the "Gem Manufacturing Company" arose as the namesake behind this design, with a reference appearing in an 1883 article touting the benefits of the "Gem Paper-Fastener". However, no illustrations of these early "Gem paper clips" exist, making it unclear whether the company truly invented the modern Gem paper clip. Interestingly, aside from Cushman and Denison’s branding claim, even 30 years after its first appearance the Gem-style paper clip still remained unpatented. Stranger still, in 1899 a patent was granted to William Middlebrook of Waterbury, Connecticut, for a "Machine for making wire paper clips." Within the pages of his patent filing was a drawing clearly showing that the product produced was a Gem-style paper clip.
OTHER CLAIMS
There have been several other unsubstantiated claims to the invention of the modern paper clip. The Norwegian inventor Johan Vaaler is often credited; his paper-binding invention was illustrated within his book, though it looked more like a modern cotter pin than a contemporary Gem-style paper clip. In 1901, Vaaler was liberally granted patents in both Germany and the United States for a paper clip of similar design, though it lacked the final bend found in Gem paper clips. Vaaler would posthumously become a national myth, based on the false assumption that the paper clip was invented by an unrecognized Norwegian prodigy.
The Gem-style paper clip would remain mostly unchanged over the next 120 years. It would even become a national symbol in Norway, where it was commemorated on a postage stamp in 1999. Many manufacturers have attempted to improve on the design by adding ripples for a better grip.
Still, the simple steel wire Gem-style paper clip remains a staple of basic office supply needs even today. Its ease of use and its effectiveness at gripping and storing papers without tangling or damaging them have made it one of the few inventions in human history that has proven difficult to improve upon.
The Story Of Large Vessel Engines | New Mind | 2020-01-03
A look at the evolution of the engines that power large cargo vessels over the last 100 years, starting with coal-fired reciprocating steam engines such as the triple-expansion engine, moving to steam turbines, and finally to modern marine diesel engines. The different configurations of marine diesels are also explored, along with how their characteristics lend themselves to powering the largest ships in the world.
“Stock footage provided by Videvo, downloaded from https://www.videvo.net”

The Science Of Small Distances | New Mind | 2019-12-19
We explore the precise measurement and machining of small distances and their importance to modern industrial society. The history of the meter and of distance measurement is explained, along with intuitive examples of small distances, moving from millimeter scales to the realm of microns. Further, we discuss some of the engineering issues that emerge as we try to machine at smaller tolerances, such as fitment, assembly, and thermal expansion.