Parth G
Why Real Atoms Don't Look Like This - Quantum Numbers to Understand Atomic Structure by Parth G
Niels Bohr was a genius in his own right, contributing hugely to the developing theory of quantum mechanics. In this video, we take a look at what is probably his most famous work - the Bohr Model of the atom.
Before Bohr came along, scientists suspected that atoms consisted of a diffuse region of positive charge, with small negative electrons scattered throughout. This was known as the Plum Pudding model.
Ernest Rutherford, conducting the Gold Foil / Geiger Marsden experiment with his students, realized that this could not be right. When positive alpha particles were fired at a thin gold foil, instead of them all passing right through with minimal deflection, something curious was observed.
Some passed through with minimal deflection, others passed through with large deflections of around 90 degrees, and a very small proportion actually came back almost towards the source - at nearly 180 degrees of deflection. Rutherford said this was like firing a shell at a piece of tissue paper and having the shell come back to hit you - very unexpected.
He realized that all the positive charge in an atom must be concentrated in a very small region. So when an alpha particle came very close to this region, it would be deflected hugely. A glancing blow resulted in the roughly 90 degree deflections. However, because atoms were mostly empty space, the large majority of alpha particles passed right through the gold foil.
Rutherford then developed his Planetary model - with electrons orbiting the positive region known as the nucleus. This was great, but had problems of its own. If electrons were to orbit the nucleus, then they would be accelerating due to their constant change in direction.
The physics of charged objects tells us that accelerating charges radiate, and so lose energy. This can be seen from the Larmor formula, which gives the power radiated by a charge undergoing a given acceleration.
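For reference, the (non-relativistic) Larmor formula in SI units is:

```latex
P = \frac{q^2 a^2}{6 \pi \varepsilon_0 c^3}
```

where P is the radiated power, q is the charge, a is its acceleration, epsilon_0 is the vacuum permittivity, and c is the speed of light.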
Therefore, Rutherford's atoms should have been unstable, with the electrons radiating energy away and spiraling inward to the nucleus. This is where Bohr came in.
Bohr realized that there was something holding electrons at specific distances from the nucleus. He called these "allowed" locations "energy levels". His model explained why electrons did not radiate constantly, and also explained the emission spectra observed from atoms. Instead of emitting radiation at all frequencies as electrons spiraled inward, atoms would only emit at specific frequencies, set by the differences in energy between the allowed levels, whenever an electron transitioned from one level to a lower one.
He also found a wonderfully neat mathematical relationship that explained where the "allowed" energy levels were in relation to the nucleus. He found that an electron's angular momentum in a particular energy level had to be an integer multiple of the Reduced Planck Constant - a very important constant in quantum mechanics.
The angular momentum of the electron depends on its mass and speed, as well as the radius of the orbit it moves on. By setting the electric attraction between the electron and the nucleus equal to the centripetal force needed to keep the electron on that orbit, and with a bit of math, it was possible to calculate exactly where (i.e. at what radii) the allowed energy levels could be found!
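As a quick numerical sketch (not from the video), the resulting allowed radii r_n = n^2 * hbar^2 * (4 pi eps0) / (m_e * e^2) can be computed directly; the constants below are the standard values:

```python
# Sketch: the Bohr model's allowed orbital radii for hydrogen.
import math

hbar = 1.054571817e-34   # reduced Planck constant (J s)
eps0 = 8.8541878128e-12  # vacuum permittivity (F/m)
m_e  = 9.1093837015e-31  # electron mass (kg)
e    = 1.602176634e-19   # elementary charge (C)

def bohr_radius(n: int) -> float:
    """Radius of the n-th allowed energy level (metres)."""
    return n**2 * hbar**2 * 4 * math.pi * eps0 / (m_e * e**2)

for n in (1, 2, 3):
    print(f"n={n}: r = {bohr_radius(n):.3e} m")  # n=1 gives ~5.29e-11 m
```

The n = 1 result is the famous Bohr radius, about 0.53 angstroms, and the radii grow as n squared.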
Thanks for watching, please do check out my links:
MERCH - parth-gs-merch-stand.creator-spring.com
INSTAGRAM - @parthvlogs
PATREON - patreon.com/parthg
MUSIC CHANNEL - Parth G Music
Here are some affiliate links for things I use!
Quantum Physics Book I Enjoy: amzn.to/3sxLlgL
My Camera: amzn.to/2SjZzWq
ND Filter: amzn.to/3qoGwHk
Chapters:
0:00 - Niels Bohr - An Introduction
1:12 - The Plum Pudding and Planetary Models of the Atom
3:16 - The Big Problem with Rutherford's Model
5:14 - The Bohr Model of the Atom
6:17 - What Are the "Allowed" Energy Levels?
8:31 - More About Bohr
Quantum mechanics, like other theories, is based on mathematical assumptions or "postulates". In this video we look at two such postulates. We can then work out the mathematical consequences of each postulate, and test these against what actually happens in real life. If our experiments agree with the "consequences" we calculated earlier, then there's a good chance that our mathematical model represents our universe pretty well.
One postulate of quantum mechanics states that any system we study can be completely described by a mathematical function known as a wave function. It also says that the square modulus of the wave function is directly related to experimental probabilities (such as the likelihood of finding a particle in a particular region of space).
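As a toy illustration of this postulate (the amplitudes below are made up for illustration, not from the video): for a discrete system, the square modulus of each complex amplitude gives the probability of that outcome, and the probabilities sum to one for a normalised state.

```python
# Sketch of the Born rule for a hypothetical three-outcome system.
import math

amplitudes = [1 / math.sqrt(2), 0.5 + 0j, 0.5j]   # made-up state vector
probabilities = [abs(c)**2 for c in amplitudes]

print(probabilities)       # ~[0.5, 0.25, 0.25] (up to float rounding)
print(sum(probabilities))  # ~1.0, i.e. the state is normalised
```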
It's not fully clear to us yet why the wave function is related to measurement probabilities in this way - quantum mechanics doesn't tell us this, as it's only a mathematical framework.
Another postulate states that the mathematical equivalent of making a measurement is to apply an operator to the wave function. Again, the WHY behind this is unclear. But the math does work out, in that our experimental results do really seem to reflect how measurement operators applied to wave functions behave.
Because there's a gap between our mathematical framework and the actual inner workings of the universe, yet the math is so good at accurately predicting how our universe behaves, physicists have tried to come up with different interpretations of quantum mechanics.
The one we've been working with so far is the Copenhagen interpretation. It's also worth noting that quantum physics seemed to break otherwise important "rules" of physics. For example, in relativity, one part of a system can only communicate with another as fast as light (or other EM waves) can travel between them. However, in quantum physics there seemed to be instances where particles could communicate with each other instantly.
One interpretation of quantum mechanics, known as the Transactional interpretation, tried to solve this problem. The idea behind this is that each part of any system we study will send out wave-function-like waves both forward and backward in time. The waves from different parts of the system meet in a "quantum handshake". This is what decides what experimental result we will see in each case, and there's no instant transfer of information.
Another interpretation, which addresses the probabilistic nature of quantum mechanics, is known as the Many Worlds interpretation. This says that every time an event occurs, the universe splits into multiple universes, one in which each possible outcome occurs. For example, if I took a test, then in some universes I would pass and in others I would fail.
Interpretations are very interesting to think about, but so far there aren't really any ways to test them experimentally in our universe. So for now, they remain theoretical concepts.
Further Reading:
Transactional Interpretation - en.wikipedia.org/wiki/Transactional_interpretation
Many Worlds Interpretation - en.wikipedia.org/wiki/Many-worlds_interpretation
Thanks for watching, please do check out my links:
MERCH - parth-gs-merch-stand.creator-spring.com
INSTAGRAM - @parthvlogs
PATREON - patreon.com/parthg
MUSIC CHANNEL - Parth G Music
Here are some affiliate links for things I use!
Quantum Physics Book I Enjoy: amzn.to/3sxLlgL
My Camera: amzn.to/2SjZzWq
ND Filter: amzn.to/3qoGwHk
Chapters:
0:00 - The Basis of Quantum Mechanics
1:16 - Postulate 1 of Quantum Mechanics
2:58 - Do We Know Why This is True?
3:54 - Postulate 2 of Quantum Mechanics
4:25 - Why Do We Stick with Quantum Mechanics?
5:27 - Copenhagen Interpretation
6:26 - Transactional Interpretation
7:56 - Many Worlds Interpretation
Cards:
6:21 - youtube.com/watch?v=fBR5HQ-Ja10
Hermann Weyl contributed a lot to physics and math, including showing how Maxwell's Electromagnetism could be perfectly combined with Einstein's Relativity. However, outside physics and math circles, he isn't exactly famous.
In this video we start by looking at scalar and vector fields - regions of space that can be described by a numerical value and a vector value at each point in space, respectively. We also look at the gradient and curl operators, written with a downward pointing triangle known as the "nabla" or "del" operator.
A fun mathematical fact: The curl of the gradient of any scalar field is always zero (as long as the scalar field is continuous and twice-differentiable - but all our scalar fields are defo those things lol).
This is fun for physicists too - because this leads to something known as Gauge Invariance.
In the theory of electromagnetism, magnetic fields are used to describe how forces act on other magnets placed in the field. They are vector fields, and are usually labelled with the letter B. Now it turns out that B fields can sometimes be more simply described by another type of vector field known as the "vector potential", written with the letter A. If we take the curl of A, we get the B field.
But this must mean that for any given B field, there are multiple possible (and allowed) A fields. Because if we find one A field that works, such that its curl DOES give the B field we are studying, then we can also find multiple other A fields simply by adding on the gradient of any (continuous, twice-differentiable) scalar field.
Because this way, the curl of our new field (A' or "A prime"), will be the curl of A + the curl of the added bit - which we said is zero. So we get the same physical magnetic field B from our new A' field, as we did with our old A field.
And since the scalar field could be ABSOLUTELY ANYTHING we want, we can make infinitely many allowed A fields if we know just one.
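This can be checked symbolically. Here is a minimal sketch with SymPy, using an arbitrary (made-up) scalar field chi and a made-up vector potential A - any smooth choices would do:

```python
# Symbolic check that curl(grad(chi)) = 0, and hence that
# A and A' = A + grad(chi) have the same curl (same B field).
from sympy.vector import CoordSys3D, gradient, curl

N = CoordSys3D('N')
chi = N.x**2 * N.y + N.z * N.y**3          # arbitrary smooth scalar field
A = N.y * N.i + N.z * N.j + N.x * N.k      # some vector potential (made up)

A_prime = A + gradient(chi)                # a different gauge

print(curl(gradient(chi)))      # the zero vector
print(curl(A_prime) - curl(A))  # also zero: same physical B field
```

Swapping in any other twice-differentiable chi gives yet another valid A field, which is exactly the gauge freedom described above.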
This idea is gauge invariance - and it's a redundancy in the math of physics theories such as electromagnetism, relativity, and even quantum physics. It allows us to do very exciting things such as solve mathematically different problems simply by "switching to another gauge", (such as by finding a new A' to work with).
An intuitive explanation of gauge invariance comes from looking at electric fields, themselves written as the (negative) gradient of a "scalar potential". These scalar potentials are also called "electric potential" fields, and the difference between their values at two points gives us potential difference. This is the same as the voltage we study when we learn about circuits!
Now two different potential fields can be used to describe the same electric field. We can generate two different fields simply by shifting the value of one field at every point by the same amount. This way, the potential difference between two points still stays the same. We have, in essence, seen two different gauges that give us the same physical results (i.e. the same potential difference).
Thanks for watching, please do check out my links:
MERCH - parth-gs-merch-stand.creator-spring.com
INSTAGRAM - @parthvlogs
PATREON - patreon.com/parthg
MUSIC CHANNEL - Parth G Music
Here are some affiliate links for things I use!
Quantum Physics Book I Enjoy: amzn.to/3sxLlgL
My Camera: amzn.to/2SjZzWq
ND Filter: amzn.to/3qoGwHk
Chapters:
0:00 - Hermann Weyl: Making Physics Redundant
1:12 - Scalar and Vector Fields, Gradient and Curl Operators
3:37 - A Fun Mathematical Coincidence
4:05 - The Vector Potential in Electromagnetism
5:24 - Gauge Invariance - the Redundancy!
7:16 - An Intuitive (but slightly hand-wavy) Description of Gauge Invariance
Videos in Cards:
2:19 - youtu.be/hI4yTE8WT88
4:39 - youtu.be/0jW74lrpeM0
The Schrodinger equation is essentially a quantum form of the law of conservation of energy, yet its derivation isn't discussed anywhere near as much as it should be, in my opinion. I've even heard some people say that the equation CAN'T be derived, only verified experimentally.
Luckily, I came across a wonderful paper that outlines a very intuitive derivation. You can find the paper here: arxiv.org/pdf/physics/0610121.pdf
We start with the electromagnetic wave equation - which, as you may have guessed from its name, describes electromagnetic waves. One of the solutions of this equation is a sinusoidal wave in which the electric field oscillates back and forth between some field value E_0 and its negative, in both space and time.
However, this is only a solution if a required relationship, or condition, is met. This condition boils down to the idea that any photon corresponding to our wave must have an energy equal to its momentum multiplied by its speed. Conveniently, this equation essentially defines what a photon is - it's an object whose energy is related to its momentum in exactly that way.
Now this relationship (or condition) is actually part of a bigger picture - mass-energy equivalence. For any generic object, we can relate the energy of the object to its momentum and mass through this equivalence relation. For photons, m = 0 so it reduces down to E = pc. However for objects with mass, we get Einstein's famous E = mc^2 equation.
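Written out, the full equivalence relation and its two limits are:

```latex
E^2 = (pc)^2 + (mc^2)^2
```

For a massless photon (m = 0) this reduces to E = pc, and for an object at rest (p = 0) it reduces to the famous E = mc^2.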
So what if we now go backwards, but starting with the FULL equivalence relation as the condition rather than the reduced one for photons? This way the wave equation we end up with SHOULD describe not just photons, but objects with mass too. That's exactly what the Schrodinger equation needs to do.
However, when we go backwards like this, the wave equation we end up with is actually the Klein-Gordon equation. This is a relativistic equation, whereas the Schrodinger equation isn't. So we need to complete one more step: reducing the Klein-Gordon equation to non-relativistic scenarios. To do this, we see what the equation looks like at low speeds. Relativistic effects only show up at high speeds, so if we set the object's speed v to be much less than c, the Klein-Gordon equation reduces down to the Schrodinger equation!
This derivation is intuitive for those who know the mathematics because we can follow the math through, but it's also intuitive for those who don't know the math because the logical steps can be easily followed.
Thanks for watching, please do check out my links:
MERCH - parth-gs-merch-stand.creator-spring.com
INSTAGRAM - @parthvlogs
PATREON - patreon.com/parthg
MUSIC CHANNEL - Parth G Music
Here are some affiliate links for things I use!
Quantum Physics Book I Enjoy: amzn.to/3sxLlgL
My Camera: amzn.to/2SjZzWq
ND Filter: amzn.to/3qoGwHk
Timestamps:
0:00 - The Schrodinger Equation
1:25 - The Electromagnetic Wave Equation and Its Solutions
5:31 - Mass Energy Equivalence - Let's Go Backwards!
7:05 - The Klein-Gordon Equation and Relativity
8:23 - Finally, The Schrodinger Equation (Again)
Videos in Cards:
1:52 - Wave Equation: youtu.be/ub7lok-JQJE
7:39 - Klein-Gordon and Dirac Equations: youtu.be/J-neAb97aVU
Pierre Agostini, Ferenc Krausz, and Anne L'Huillier were each given a 1/3 share of the 2023 Nobel Prize in Physics, and it's exciting to see fairly new, experimental physics being recognised on such a large scale.
The award was given for work on "attosecond physics", a fairly new area of physics studying things that happen very quickly (on the attosecond, or 10^(-18) second, level). The Nobel Prize awardees have worked on generating pulses of (laser) light that are on the order of attoseconds in length.
This cannot be done by simply switching our laser on and off, since doing so this quickly is mechanically impossible. Instead, scientists use the principle of superposition to generate resultant light waves that are formed of very short pulses of high amplitude.
To do so, they need to combine multiple light waves that differ from each other by a constant frequency spacing. For example, pulses can be generated by adding together waves at 990 Hz, 1000 Hz, and 1010 Hz. These waves, each separated from the next by a constant difference (10 Hz in this case), will combine and interfere to generate regular pulses. But this will only happen if the phases and amplitudes of the combined waves are correct.
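A minimal numerical sketch of this superposition, using the frequencies from the example above with all phases set to zero:

```python
# Adding three waves with a constant 10 Hz spacing, all in phase,
# produces a train of sharp pulses repeating every 1/10 s.
import numpy as np

freqs = [990.0, 1000.0, 1010.0]         # Hz, constant 10 Hz spacing
t = np.linspace(0, 0.3, 30001)          # 0.3 s of signal, 10 us steps
signal = sum(np.cos(2 * np.pi * f * t) for f in freqs)

peak = np.max(np.abs(signal))
print(peak)  # ~3: all three waves line up in phase at each pulse centre
```

Between the pulses the three waves largely cancel; adding more equally spaced frequencies with the right phases makes the pulses narrower still, which is the idea behind attosecond pulse generation.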
Not long after the invention of the laser, scientists managed to generate pulses on the order of microseconds in length. This quickly dropped to nanoseconds, then femtoseconds. But the femtosecond barrier then proved difficult to break, due to limits on both our physical systems and our understanding of physics.
Our Nobel Prize winners have all worked on generating attosecond pulses of light, which finally broke the femtosecond barrier. One method is High Harmonic Generation (HHG), where a pulsed laser fired into a gas creates higher order harmonics that are equally spaced in frequency. Our winners showed that these harmonics could be forced to have the right phases and amplitudes such that, when combined, they created pulses on the order of attoseconds.
But what are these pulses even used for? One use is outlined in the Nobel Committee's description of why these scientists won the prize. They state that the pulses can be used to understand electron dynamics in matter.
One example of this that we see in the video is when an inner-shell electron is ejected from an atom. It leaves behind a hole, which can be filled by another electron from a higher shell. But this happens so fast that even femtosecond pulses are too long to give us an exact picture of what's going on. We don't know if a single electron falls from a higher level to the lower one, or if all the electrons rearrange, or if something else entirely happens. We need attosecond pulses to give us more information about how electrons behave within atoms and molecules.
Thanks for watching, please do check out my links:
MERCH - parth-gs-merch-stand.creator-spring.com
INSTAGRAM - @parthvlogs
PATREON - patreon.com/parthg
MUSIC CHANNEL - Parth G Music
Here are some affiliate links for things I use!
Quantum Physics Book I Enjoy: amzn.to/3sxLlgL
My Camera: amzn.to/2SjZzWq
ND Filter: amzn.to/3qoGwHk
Timestamps:
0:00 - Pulsed Lasers and Interference
1:38 - How to Make Pulses (and Make Them Shorter)
2:54 - The History of Pulsed Light
3:55 - A Breakthrough! High Harmonic Generation
4:29 - Applications: Electron Dynamics in Matter
5:35 - Conclusion
In this video, we take a look at the van der Waals Gas Equation - a brilliant upgrade to the Ideal Gas Equation, which uses sound logical arguments to improve on the Ideal Gas Model.
To understand the changes made by van der Waals, we start by understanding the Ideal Gas Equation. It tells us that the product of the pressure and volume of a gas is equal to the product of the amount of gas (in moles), the temperature of the gas (in kelvin), and the molar gas constant, R. To arrive at this equation, physicists had to make a couple of somewhat silly assumptions.
Firstly, they assumed that all gas particles were infinitesimally small, or in other words that they had no volume. This isn't realistic, as all gas particles have some real volume, albeit quite small. The ideal gas equation therefore only works when the GAS volume is much much bigger than the volume of the PARTICLES combined.
The second assumption of the Ideal Gas model is that there are no inter-particle interactions (such as due to electromagnetic forces between particles). The particles only interact when they collide with each other, and not by just passing close by to each other. Again this is unrealistic as electromagnetic forces can sometimes be quite strong between gas particles. So the Ideal Gas model only works for gases with very weak interparticle interactions.
Here's where van der Waals comes into the picture. He made two modifications to the Ideal Gas model so it would work in many more scenarios.
Firstly, he replaced the volume per mole term with (volume per mole - b). The quantity "b" is the molar volume of just the gas particles themselves (i.e. not the space between them). By subtracting this from the total volume they occupy, we account for just the available space BETWEEN molecules, and also encode into our mathematics the idea that these particles have an actual, non-zero volume.
Secondly, he replaced the pressure term with (pressure + a/(molar volume)^2). This term accounts for the reduced pressure experienced by the container of the gas, because the particles of the gas exert forces on each other even when they aren't colliding. The logic is that particles close to the centre of the container "pull in" particles near the walls, thus reducing the forces with which the particles near the walls can hit the walls. And the more particles there are, the more the pressure reduces. The quantity "a" is just the proportionality constant.
We also see that in specific scenarios (where the Ideal Gas model worked anyway) the van der Waals gas equation actually reduces to (or becomes) the Ideal Gas Equation - success!
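A minimal sketch comparing the two equations numerically; the a and b values below are the standard tabulated van der Waals constants for CO2:

```python
# Ideal gas vs. van der Waals pressure for one mole of CO2.
R = 8.314       # molar gas constant (J / mol K)
a = 0.364       # CO2 attraction constant (J m^3 / mol^2)
b = 4.27e-5     # CO2 molar co-volume (m^3 / mol)

def p_ideal(T, Vm):
    """Ideal gas: P = RT / Vm."""
    return R * T / Vm

def p_vdw(T, Vm):
    """van der Waals: (P + a/Vm^2)(Vm - b) = RT, rearranged for P."""
    return R * T / (Vm - b) - a / Vm**2

T = 300.0
for Vm in (1e-3, 1e-1):  # a fairly dense gas vs. a dilute one
    print(f"Vm={Vm}: ideal={p_ideal(T, Vm):.3e} Pa, vdW={p_vdw(T, Vm):.3e} Pa")
```

At the large molar volume the two pressures agree to within a fraction of a percent, showing the reduction to the Ideal Gas Equation in the dilute limit; at the small molar volume the corrections matter.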
Thanks for watching, please do check out my links:
MERCH - parth-gs-merch-stand.creator-spring.com
INSTAGRAM - @parthvlogs
PATREON - patreon.com/parthg
MUSIC CHANNEL - Parth G Music
Here are some affiliate links for things I use!
Quantum Physics Book I Enjoy: amzn.to/3sxLlgL
My Camera: amzn.to/2SjZzWq
ND Filter: amzn.to/3qoGwHk
Cards:
Ideal Gas Equation - youtu.be/At-kMCldv6c
Timestamps:
0:00 - Johannes Diderik van der Waals
1:40 - The Ideal Gas Equation and its Assumptions
4:15 - First Modification: Volume
6:35 - Second Modification: Pressure
9:50 - The van der Waals Gas Equation is Just... Better!
The Schrodinger Equation is famous, and rightly so. It's the governing equation of a theory called quantum mechanics, and it can very accurately predict how quantum systems (i.e. very small systems) will behave through space and over time. Its basic premise is that it adds together a system's kinetic and potential energies and equates this to the system's total energy. That seems like common sense, but the Schrodinger Equation is "quantized", meaning measurements on the system only give very specific results. We can also never predict exactly which measurement outcome we will get, only the probabilities of each possible outcome. The Schrodinger equation also comes with "measurement operators", which are the mathematical equivalent of making a measurement on the system.
Importantly, the Schrodinger Equation is not relativistic. In other words, it does not account for the strange effects we see when relativity is taken into account. We know that when objects move at high speeds relative to each other, they measure distances and times noticeably differently from each other. Because these effects are not accounted for, the Schrodinger Equation does not always accurately predict the behaviour of small systems that may be moving at high speeds. It also treats time as a universal variable (i.e. everybody measures time in the same way), which is not how relativity deals with what it calls "the fourth dimension".
To save quantum mechanics in these high-speed scenarios, we need to look at some other equations that are both quantised and relativistic. The first equation of this sort that we'll look at is known as the Klein-Gordon Equation. To get this equation we start with Einstein's famous mass-energy relation (E = mc^2). But in reality, we start with the full version of this equation which also involves momentum. Taking this full mass-energy equivalence relation, we can then quantise it and derive the Klein-Gordon Equation.
The Klein-Gordon Equation accurately predicts the behaviour of spin-0 particles. In other words, it does not account for spin. But it is quantum and relativistic. It also has a "psi" quantity in it, just like the Schrodinger equation, but here "psi" is related to charge density, not probability density. This is because the Klein-Gordon Equation allows the quantity we would previously have interpreted as a probability density to go negative. It makes no sense to have negative probabilities; instead, this equation deals with the behaviours of particles with positive, negative, and zero charge.
To account for spin, then, we need to look at yet another equation. Remember, spin is angular momentum that is inherent to a particle (without it moving along a curved path or rotating). The equation that starts to account for spin is the very famous Dirac Equation. It's highly complicated, but can be essentially thought of as the square root of the Klein-Gordon Equation. It has four complex degrees of freedom in its "psi" quantity. The first two of these look like the quantum wave function "psi", but the remaining two encode details for systems that are quantum and also relativistic.
When Dirac came up with his equation, he realized that some of its solutions allowed for particles similar to the ones we know, but with the exact opposite charge. For example, electron-like particles with +1 unit of charge rather than -1 were allowed. Dirac initially thought this was a mistake, but we eventually found that such particles exist! We now call them antiparticles, and they make up antimatter. In other words, what was initially thought to be an accident of under-constrained mathematics actually provided a wonderful prediction of phenomena never seen before!
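For reference, the three equations discussed in the video, in their standard forms (free particle in the latter two cases):

```latex
i\hbar \frac{\partial \psi}{\partial t}
  = -\frac{\hbar^2}{2m}\nabla^2 \psi + V\psi
  \qquad \text{(Schrodinger)}

\frac{1}{c^2}\frac{\partial^2 \psi}{\partial t^2} - \nabla^2 \psi
  + \frac{m^2 c^2}{\hbar^2}\psi = 0
  \qquad \text{(Klein-Gordon)}

\left( i\hbar \gamma^\mu \partial_\mu - mc \right)\psi = 0
  \qquad \text{(Dirac)}
```

Note how the Klein-Gordon equation treats space and time symmetrically (second derivatives in both), while the Schrodinger equation does not - a quick way to see which is relativistic.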
Thanks for watching, please do check out my links:
MERCH - parth-gs-merch-stand.creator-spring.com
INSTAGRAM - @parthvlogs
PATREON - patreon.com/parthg
MUSIC CHANNEL - Parth G Music
Here are some affiliate links for things I use!
Quantum Physics Book I Enjoy: amzn.to/3sxLlgL
My Camera: amzn.to/2SjZzWq
ND Filter: amzn.to/3qoGwHk
Timestamps:
0:00 - Understanding the Schrodinger Equation
3:50 - Relativistic Quantum Mechanics
5:05 - The Klein-Gordon Equation
7:42 - The Dirac Equation
Videos in Cards:
1) youtu.be/BFTxP03H13k
2) youtu.be/DCrvanB2UWA
In the theory of General Relativity, the Einstein Field Equations are tensor equations that govern the theory completely. They link together the distribution of stuff (e.g. mass, energy, momentum, and pressure) found in any region of spacetime that we want to consider, and the resulting warping of the spacetime in that region.
Basically, mass tells spacetime how to warp, and the warping of spacetime tells objects within it how to move. This is a simplification, so check out my video on the Einstein Field Equations for a more detailed description: youtu.be/FJnTItLVIqQ
The interesting thing we'll look at in this video is how to solve the Einstein Field Equations. A solution links together a realistic warping of spacetime with the distribution of stuff inside it that causes the warping. More precisely, a solution is in the form of a metric tensor, which describes the bending of spacetime and the geometries within it.
The first solution we look at is the Schwarzschild solution. It studies the shape of spacetime around a spherical object with mass. The mass and size of the sphere can be varied, and the solution still works. This solution was first discovered by Karl Schwarzschild only about a month after Einstein published his paper on General Relativity. It was also discovered by Johannes Droste not long after, with a more elegant method.
The Schwarzschild solution studies a non-rotating, uncharged sphere, so can be used to describe the spacetime around celestial bodies like the Earth and the Sun. Of course both these bodies are not perfect spheres and they are slowly rotating, but on a cosmological scale they are very approximately perfect, non-rotating, uncharged spheres.
The Schwarzschild solution also describes the spacetime around black holes. These are dense objects, where a lot of mass is packed into a very small region of space. The spacetime solution works outside the black hole, up until the event horizon. The solution also predicts what happens inside the black hole, but we have no way of knowing this since not even light can escape (to bring us information) once past the event horizon. Other solutions of the equation deal with rotating or charged black holes - check out my video on the Kerr solution here: youtu.be/kIbP2Sg8y18
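As a quick numerical aside (not from the video), the event horizon in the Schwarzschild solution sits at the Schwarzschild radius r_s = 2GM/c^2, which is easy to evaluate for familiar masses:

```python
# The Schwarzschild radius r_s = 2GM/c^2 - the event-horizon radius
# appearing in the Schwarzschild solution.
G = 6.674e-11   # gravitational constant (m^3 / kg s^2)
c = 2.998e8     # speed of light (m/s)

def schwarzschild_radius(mass_kg: float) -> float:
    return 2 * G * mass_kg / c**2

print(f"Sun:   {schwarzschild_radius(1.989e30):.0f} m")        # ~3 km
print(f"Earth: {schwarzschild_radius(5.972e24) * 1000:.1f} mm")  # ~9 mm
```

In other words, the Sun would only become a black hole if all its mass were squeezed inside a sphere about 3 km in radius.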
The next solution we look at is the flat spacetime solution for an empty region of spacetime. This is a very important, but trivial, solution. It just says that spacetime is NOT warped when there is no mass to warp it - just as we'd expect in a steady-state case. But this is not the only solution that studies an empty region of spacetime.
Gravitational waves are another possible solution to the equation. They are a rippling of the spacetime fabric, transferring energy from the source, and they do not need mass to exist in the region through which they travel (though they are usually formed by interactions of massive objects).
Thanks for watching, please do check out my links:
MERCH - parth-gs-merch-stand.creator-spring.com
INSTAGRAM - @parthvlogs
PATREON - patreon.com/parthg
MUSIC CHANNEL - Parth G's Shenanigans
Here are some affiliate links for things I use!
Quantum Physics Book I Enjoy: amzn.to/3sxLlgL
My Camera: amzn.to/2SjZzWq
ND Filter: amzn.to/3qoGwHk
Timestamps:
0:00 - Einstein's Field Equations in General Relativity
2:26 - What Does It Mean to Solve Einstein's Field Equations?
3:29 - The Schwarzschild Solution (Black Holes!)
6:20 - The Flat Spacetime Solution
6:59 - Gravitational Waves!
James Clerk Maxwell is best known for his four equations that completely describe everything in Classical Electromagnetism - they're known as Maxwell's Equations. However, did you know that he has yet more equations named after him in another field of physics: thermodynamics?
In this video we take a look at these "Maxwell Relations". We begin by looking at the First Law of Thermodynamics, which states that a system's internal energy, U, can change because of heat being transferred to the system or work being done by the system. These heat and work terms can also be described in terms of properties of the system such as temperature, entropy, pressure, and volume.
When we write this expression in terms of the four properties of the system, we realize that it's made up of partial derivatives. Therefore, we can take the general rules of partial derivatives and apply them to this specific thermodynamic scenario. Doing this, we find relationships between partial derivatives of the internal energy and the properties we mentioned earlier (temperature, entropy, etc).
Next, we can take second partial derivatives to find relationships between the properties themselves - relationships that are not immediately obvious from just the physics of the situation. Instead, we've used math and physics together to come up with new, unintuitive relationships known as Maxwell Relations. In fact, these relations are arguably better "Maxwell Equations" than the electromagnetic ones, as those four equations were actually distilled down from many of Maxwell's original equations by Oliver Heaviside.
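As a concrete instance: starting from the first law for internal energy, dU = T dS - P dV, we read off T = (dU/dS) at constant V and P = -(dU/dV) at constant S, and equality of the mixed second derivatives of U gives one of the Maxwell Relations:

```latex
\frac{\partial^2 U}{\partial V \, \partial S}
  = \frac{\partial^2 U}{\partial S \, \partial V}
\quad \Longrightarrow \quad
\left( \frac{\partial T}{\partial V} \right)_S
  = -\left( \frac{\partial P}{\partial S} \right)_V
```

The other Maxwell Relations follow the same pattern, starting from the other thermodynamic potentials (enthalpy, Helmholtz and Gibbs free energies).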
Thanks for watching, please do check out my links:
MERCH - parth-gs-merch-stand.creator-spring.com
INSTAGRAM - @parthvlogs
PATREON - patreon.com/parthg
MUSIC CHANNEL - Parth G's Shenanigans
Here are some affiliate links for things I use!
Quantum Physics Book I Enjoy: amzn.to/3sxLlgL
My Camera: amzn.to/2SjZzWq
ND Filter: amzn.to/3qoGwHk
Timestamps:
0:00 - Maxwell's Equations of Classical Electromagnetism
1:07 - The First Law of Thermodynamics
2:10 - Big Thanks to Brilliant for Sponsoring This Video!
3:22 - Heat and Work
4:52 - Partial Derivatives and Some Fun Math
7:06 - Second Derivatives and Maxwell Relations
9:35 - When Internal Energy Doesn't Work
Useful Links to Understand This Video:
1) youtube.com/playlist?list=PLOlz9q28K2e6aNgl1zt1xccyy4Ofl3YAk
2) youtube.com/watch?v=bM4ykIumlss&t=4s
3) en.wikipedia.org/wiki/Heat
4) en.wikipedia.org/wiki/Partial_derivative
#maxwell #maxwellequations #physics #parthg
This video was sponsored by Brilliant #ad
Our most robust theory of physics so far seems to be #thermodynamics
Here are two simple assumptions behind statistical mechanics, the small-scale detailed description of thermodynamics. #statisticalmechanics #entropy
The Second Law of Thermodynamics essentially states that heat (or more precisely, thermal energy) cannot be transferred spontaneously from a colder object to a hotter object. Instead, when two objects of different temperatures are brought into thermal contact, thermal energy will spontaneously flow from the hotter object to the colder one until they both reach equilibrium at some temperature between the two objects' initial temperatures. This is the Clausius statement of the Law.
Thermodynamics is the large-scale study of heat and energy within systems (such as an entire gas, liquid, or solid). Statistical mechanics, by contrast, studies the individual particles making up each system. This small-scale approach allows us to make very precise predictions of how the system will behave. However, it is very difficult in practice due to the often huge numbers of particles in each system. So instead we need to find ways to link the small-scale statistical mechanics theory to the large-scale thermodynamics theory.
The first assumption of statistical mechanics is that each microstate for a given system (each possible energy arrangement of particles within the system) is just as likely as all the other possible arrangements. This is the Law of Equal A-Priori Probabilities.
The second assumption is that the microstates give rise to the large-scale properties of the system, such as volume, pressure, or temperature. It also states that we should be able to link a measured property of a system with the weighted average of the individual microstates' properties over the measurement period.
By the way, Boltzmann discovered the equation linking omega (number of microstates) to entropy, and hence the understanding of entropy as disorder. Since the entropy of the universe increases over time, it is thanks to his work that we know how the universe will (probably) end - in a highly disordered state.
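The equation in question (famously engraved on Boltzmann's tombstone) is:

```latex
% S: entropy, k_B: Boltzmann's constant, Omega: number of microstates
S = k_B \ln \Omega
```

More microstates means higher entropy, which is why entropy can be read as a measure of disorder.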
Useful Resources
1) My Entropy Video: youtu.be/mg0hueOyoAw
2) Detailing the Assumptions of Stat Mech: https://ocw.mit.edu/courses/3-012-fundamentals-of-materials-science-fall-2005/5c0bfa52fd617b90f85cfbd2e2d1ea78_lec21t.pdf
Timestamps:
0:00 - The Second Law of Thermodynamics and Entropy
3:04 - Sponsor Message - Check Out Brilliant.org in the Description
4:14 - Microstates of a System
6:03 - The First Assumption of Statistical Mechanics
7:50 - The Second Assumption of Statistical Mechanics
This video was sponsored by Brilliant. #ad
The 2005 Nobel Prize in Physics was awarded to Roy J. Glauber for his work on Quantum Optics. However, the decision to award the Prize to Glauber was controversial. Many physicists believed that E.C. George Sudarshan made just as many important contributions to the field, including ideas that Roy was eventually credited with (having initially criticized George's work, and then later agreeing with him).
A group of physicists wrote letters to the Swedish Academy (responsible for determining who should win the prize) to state that this was a grave miscarriage of justice, and to ask why George was not considered instead of Roy. The Swedish Academy does not reveal its full decision-making process until 50 years after a prize is awarded, so we've got some time to wait. However, they did mention that Roy published his research first, and that he had other contributions which counted towards the prize.
Additionally, Alfred Nobel's will clearly states that only three physics laureates can be chosen each year, with only two distinct pieces of work being recognized. The committee had already decided to honor two experimental physicists that year, so there was only one spot left - and that was given to Roy.
George (and others) felt that he had been hard done by. George asked, "If you give a prize for a building, shouldn't the fellow who built the first floor be given the prize before those who built the second floor?" He also wrote, "No one has the right to take my discoveries and formulations and ascribe them to someone else!"
Suffice it to say that although most people thought Roy was very deserving of the Nobel Prize for his work, many felt George should have been honored at least to the same degree. It's also important to note that most people do not believe Roy stole George's work - after all, they communicated back and forth often, improving on each other's ideas. However, the fact that Roy criticized George's ideas, then came out with a "p-representation" that was mathematically equivalent to George's "diagonal representation", left a bad taste in people's mouths - especially because this work was quoted as one of the reasons the Prize was awarded to Roy.
The work in question centers around coherent states of light. Basically, these are quantum mechanical states that borrow some understanding from classical electromagnetism to describe how light behaves. Coherent states are the closest equivalent we can find in quantum mechanics to the light waves that we are so familiar with from high school physics. Although early quantum mechanics dealt with photons (particles) of light, coherent states showed how light WAVES could be represented.
In this video we understand the basics of quantum harmonic oscillators (systems with quadratic potential wells). If we treat the electromagnetic field as a quantum harmonic oscillator, then each of its allowed states (eigenstates) represents a different number of photons carrying energy through space. The lowest energy state is one which has zero photons - no light energy. The next one consists of 1 single photon. Then 2 photons, and so on.
Coherent states are quantum superpositions of ALL the possible photon number states. In other words, from a quantum perspective, a light wave is a blend of states containing every possible number of photons. The light wave may be made up of 0, or 1, or 2, etc. photons - in fact, before we measure it, it is made up of ALL of these possibilities at once. And once we measure the wave, the system collapses into one possible state. For example, we may find it's made up of 4 photons. This is a great example of wave-particle duality!
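As a small numerical sketch of this (the function name is my own, and this is the standard textbook result rather than anything specific to the video): the square moduli of a coherent state's amplitudes form a Poisson distribution, so the photon-number probabilities sum to one and have mean |alpha|^2.

```python
import math

def photon_number_probability(alpha, n):
    """Probability of measuring n photons in a coherent state |alpha>.

    The amplitude of the n-photon state in the superposition is
    exp(-|alpha|^2 / 2) * alpha^n / sqrt(n!), so its square modulus
    follows a Poisson distribution with mean |alpha|^2.
    """
    mean = abs(alpha) ** 2
    return math.exp(-mean) * mean ** n / math.factorial(n)

# A coherent state with alpha = 2 is a blend of 0, 1, 2, ... photon states;
# on measurement we find, on average, |alpha|^2 = 4 photons.
probs = [photon_number_probability(2.0, n) for n in range(50)]
print(round(sum(probs), 6))                               # probabilities sum to 1
print(round(sum(n * p for n, p in enumerate(probs)), 6))  # mean photon number ~4
```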
Coherent states were worked on by both Roy and George (and initially proposed by Roy), and both men devised important mathematical representations of these states.
Videos in Cards:
1) youtu.be/ocJBIXua6zQ
2) youtu.be/j-neq1KhuPc
3) youtu.be/Is_QH3evpXw
Timestamps:
0:00 - Meet E.C. George Sudarshan and Roy Glauber
2:07 - Big Thanks to Brilliant, Check Out their Courses in the Description
3:15 - Nobel Prize Controversy and Backlash
4:56 - Why Roy but Not George?
5:50 - Classical and Quantum Harmonic Oscillators
9:09 - Electromagnetic Waves and Coherent States of Light
#nobelprize #physics #nobelprizewinners
This video was sponsored by Brilliant #ad
The first 100 to sign up will get their first month of the subscription covered by Wren for free!
Emmy Noether was a brilliant mathematician, who was described by Einstein as "the most significant creative mathematical genius thus far produced since the higher education of women began". In fact, she may have been one of the most important mathematicians of all time when it comes to changing physics forever. She discovered a theorem that links together seemingly unrelated concepts that are very fundamental to our understanding of physics.
Noether's theorem (technically Noether's first theorem) states that there is an inherent link between certain kinds of symmetry within the universe and conservation laws (such as conservation of momentum, energy, and angular momentum). If one exists, then so must the other.
In this video, we start by understanding what we mean by a symmetry. Specifically, this refers to unchanging behaviours of any system that we study even when a specific variable is changed. For example, if we move a ball to a different position in space, its behaviour does not suddenly change. This is "translational symmetry". We also look at "temporal symmetry" (symmetry over time) and "rotational symmetry" (symmetry over angular displacement). Basically, a system's behaviours do not inherently change, and these symmetries exist, because the laws of physics stay the same regardless of position, time, or angle!
Noether's theorem states that if such a symmetry exists, then there HAS to be a conservation law that corresponds to it. Conservation of momentum comes about because of translational symmetry. Conservation of energy comes about because of temporal symmetry. And conservation of angular momentum comes about because of rotational symmetry. So this possibly gives us a reason as to WHY these conservation laws exist in the first place. But how do we know that symmetries must have an associated conservation law?
To understand this, we take a look at the Euler-Lagrange equation. This allows us to use some basic Lagrangian mechanics to understand how a system changes over time. One special case of the Euler-Lagrange equation can be shown to be equivalent to Newton's Second Law of Motion - the force on the system being equal to its rate of change of momentum. So if the force exerted on the overall system is zero, then its momentum is constant (or conserved)! This also applies to other types of symmetry and conservation law through the use of the Euler-Lagrange equations.
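The argument above can be sketched as follows (the standard textbook form of the equation, for a single coordinate):

```latex
% Euler-Lagrange equation for a coordinate q:
\frac{d}{dt}\left(\frac{\partial L}{\partial \dot{q}}\right)
  = \frac{\partial L}{\partial q}
% For L = \tfrac{1}{2} m \dot{x}^2 - V(x) this is Newton's second law:
\frac{d}{dt}\left(m\dot{x}\right) = -\frac{dV}{dx} = F
% If L does not depend on x at all (translational symmetry), the
% right-hand side vanishes and the momentum p = m\dot{x} is conserved.
```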
Timestamps:
0:00 - Emmy Noether
0:40 - Noether's Theorem: Symmetries
3:33 - Check out Wren to Calculate Your Carbon Footprint!
5:10 - Symmetries and Conservation Laws
6:28 - Lagrangian Mechanics
7:28 - The Euler-Lagrange Equation
(My video on Lagrangian Mechanics and the Euler-Lagrange Equation: youtu.be/KpLno70oYHE)
#physics #scientist #mathematics
#ad this video was sponsored by Wren!
This is Einstein's final contribution to physics... and unfortunately it was left unfinished.
Also, a huge thanks to @ChrisPattisonCosmo for working with me on this video. You need to go check out his channel now if you haven't seen it already. Also, he did a video looking at Einstein's blackboard in more detail. Check it out here: youtu.be/AB_RyZzGFEg
A photographer recorded forever the final work of Albert Einstein when he took a photo of Einstein's office on the day of his death. This photo was printed in Time magazine, and is hugely famous for obvious reasons. In this video, we try to understand what exactly Einstein was working on.
By the types of markings and mathematics found on the board, we can understand that the mathematics is that of General Relativity, Einstein's theory that best describes gravity and the universe on a large scale. We try and understand a few specific elements seen on the blackboard.
Firstly, we see that the metric tensor (or just "metric" for short) features quite a lot on the blackboard. The metric is used to describe the curvature of any region of spacetime that we are studying. Basically, it helps us to calculate distances between two points within spacetime depending on how the fabric of reality is warped. The general metric is described by the letter "g", and it has two subscripts because it is a "rank 2" tensor. We also see the greek letter "eta" used to represent the metric, but this is only in the special case where we are studying a "flat" (or curvature-free) spacetime.
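In this notation, the metric defines the interval between nearby points of spacetime (the −+++ sign convention is assumed here; conventions vary):

```latex
% Spacetime interval in terms of the metric tensor:
ds^2 = g_{\mu\nu}\, dx^{\mu}\, dx^{\nu}
% Flat-spacetime special case:
g_{\mu\nu} \;\to\; \eta_{\mu\nu} = \mathrm{diag}(-1,\,+1,\,+1,\,+1)
```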
Next, we understand that Einstein was trying to write the metric tensor for a generic spacetime in terms of "tetrads". Tetrads are simply vectors that can be defined at each point in spacetime, which point in the direction that each coordinate increases. In flat spacetime, they always point in the same directions, but in curved spacetimes their directions may change. Tetrads are useful for any observers that are actually within the spacetime being studied, whereas the metric is good for taking an overview or general look at the spacetime as a whole.
On the surface, it may seem useful to write a rank 2 tensor (the metric) in terms of vectors or rank 1 tensors (tetrads), since tetrads are simpler entities. However, Einstein also showed on his blackboard that the number of degrees of freedom is exactly the same regardless of how we write general relativity (the freedom is just packaged up in different places).
Degrees of freedom refer to how many variables (or coordinates) can change in any system we're studying. For example, for a ball moving through the air we need 3 coordinates - usually x, y, z. Hence, there are 3 degrees of freedom. But for a ball moving on a table, we only need x and y, since its height does not change. Hence this system of a ball on a table only has 2 degrees of freedom.
Finally, we understand that Einstein designed a convention for writing sums in a short way when working with general relativity. This is known as the "Einstein Summation Convention". It helps us not have to write long and boring sigma (sum) signs, and just get on with the maths instead. However, we see on the blackboard that Einstein did not use the summation convention (nor always put in the required number of subscripts for tensors). This may have been because he was explaining his ideas and theories to someone else, and hence erasing parts of his explanations that were no longer relevant.
Timestamps:
0:00 - Albert Einstein's Office on the Day of His Death
0:37 - The Metric Tensor Explained
2:44 - Big Thanks to BRILLIANT. Check Out Their Brilliant Lessons Below!
4:01 - Chris Explains Flat Spacetime
6:25 - The Tetrad Formalism
8:55 - Q: Why Was Einstein Working On This? A: Degrees of Freedom
11:54 - Einstein Ruining His Own Summation Convention
13:55 - To Sum Up
Videos in Cards:
1) youtube.com/watch?v=l8UUlOOuF_g
2) youtube.com/watch?v=Ujvy2-o1I9c
3) youtube.com/watch?v=FJnTItLVIqQ
#einstein #physics #relativity
This video was sponsored by Brilliant #ad
The first 100 to sign up will get their first month of the subscription covered by Wren for free!
Here's what you need in order to build a quantum computer: a bunch of qubits, and a way of keeping them all entangled and stable without decohering. But what does all of this even mean?
In this video, we start by understanding that classical computers are made from classical bits, or binary digits. These are basically switches that can be found in one of two positions. These positions can be labelled "0" and "1", or "TRUE" and "FALSE", or "YES" and "NO". The switches are physically made from transistors arranged in circuits within our computer in complicated and interesting ways.
Multiple bits together can be used to store information, using binary code. A simple version of this would be to store letters of the alphabet. Say our code was that 00001 = A, 00010 = B, 00011 = C, and so on. The positions of the transistor bits could be changed to reflect this. In addition to this, the changes in positions of these switches could be used to conduct computations and other functions a computer can undertake.
Now let's take our classical computer and make it quantum instead. To do this, we ditch our classical bits, and instead use quantum bits, or "qubits" for short. Qubits, when measured, can be found in either the "0" or the "1" state. However between measurements, the qubits can be in a "superposition" or blend of the two possible states. Another way of thinking about this is that the qubits can oscillate between the two possible states, which means at some points in time they will be between the two states. If we make a measurement when a qubit is in this superposition, then it will immediately flip to one of the possible measurement states. The probability with which it will flip to either one is given by how much of each state was "mixed" into the superposition.
The state of each qubit can be written using a wave function. The overall state can be written as a sum of the "0" and "1" states, with a number (an amplitude) in front of each possible measurement state. If we find the square modulus of the amplitude for a given state, we get the probability of finding our system in that measurement state.
The reason we need to take the square modulus is that these amplitudes can be positive or negative, and even real or imaginary. In other words, they are not restricted in any way and can be fully complex. Complex numbers are formed by summing a real and an imaginary number. And if we take the square modulus of a complex number, we always get a real, non-negative number. This is exactly what we need, because the square modulus represents a probability, which must be real and non-negative. The amplitudes will also be essential for visualizing our qubits and their behavior in the next video within this mini-series!
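As a minimal numerical sketch of this (the amplitudes here are hypothetical values chosen for illustration, not from the video):

```python
import math

# A qubit state a|0> + b|1> with complex amplitudes a and b.
a = complex(1 / math.sqrt(2), 0)   # amplitude of |0>
b = complex(0, -1 / math.sqrt(2))  # amplitude of |1> (purely imaginary)

# The square modulus of each amplitude gives the measurement probability.
p0 = abs(a) ** 2  # probability of measuring "0"
p1 = abs(b) ** 2  # probability of measuring "1"

print(p0, p1)   # ~0.5 each, even though b itself is imaginary
print(p0 + p1)  # ~1: the probabilities of all outcomes sum to one
```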
Lastly, we look at how qubits are made in real life. We see two examples of two-state systems. Firstly, we study an atom with a single electron that can be found in one of two energy levels. Secondly, we look at a single electron, which has two spin states - spin up and spin down. These can also be relabeled as "0" and "1", meaning an electron can behave as a qubit. The difficulty in making quantum computers is in ensuring multiple qubits remain entangled together, and do not decohere due to external influences. This usually involves huge energy costs in cooling, and highly specialized environments.
Timestamps:
0:00 - Classical Computers and Bits
1:25 - Quantum Bits (Qubits)
2:16 - Huge Thanks to Wren for Sponsoring This Video!
4:00 - Understanding Qubit Math (Wave Functions)
6:27 - Imaginary and Complex Numbers
9:10 - Real Life Examples of Qubits
#ad This video is sponsored by Wren.
Here's a talk I did recently (huge thanks to Reading School for inviting me) discussing this rather interesting concept. We started by considering the structure of atoms. We know that each atom has protons and neutrons in a nucleus, and electrons surrounding this nucleus. The electrons are arranged in shells and orbitals. But why is that?
To answer that, we need to look at quantum mechanics. Quantum mechanics is basically the study of very small objects, such as the particles that make up atoms. And it tells us that these particles behave in rather strange, unintuitive ways. When we do an experiment to find where a particle is, it's not always going to be where we expect it. However, we can work out the probability that we'll find our particle at a given point in space each time we are about to measure its position. So instead of working with pesky, difficult-to-track particles, we work with a wave function.
A wave function is a mathematical function that changes smoothly over time, and tells us something about our particle. Most commonly, it tells us the probability of finding our particle at different points in space. To find this probability, we take the square modulus of the wave function. This means that the square modulus of the wave function is a directly measurable quantity, since we can repeat the experiment and work out the probability of each experimental result. So why deal with the wave function at all, rather than just its square modulus? Because the wave function itself contains information (such as relative phase) that is lost when we take the square modulus.
We also look at the wave functions of multi-particle systems. We understand how the probability of each experimental result changes if we swap two particles. For identical particles, the probability does not change because we cannot tell them apart. If this is true for all possible experimental results, then these particles are said to be "indistinguishable". There are two flavors of indistinguishable particles: bosons and fermions.
Electrons happen to be fermions. We see how two fermions with the same spin state can never be found in the same orbital state. In other words, for every orbital in an atom, there can only be two electrons in it - one with spin up, and one with spin down. This is known as the Pauli Exclusion Principle. And it's because of this principle that atoms are arranged the way they are - with electrons being found in shells and orbitals (rather than all being in the same low-energy state).
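The step from indistinguishability to exclusion can be sketched as follows: swapping two identical particles can at most flip the wave function's sign, since the square modulus (the measurable probability) must stay the same.

```latex
% Swapping identical particles 1 and 2:
\psi(x_2, x_1) = +\,\psi(x_1, x_2) \quad \text{(bosons)}
\psi(x_2, x_1) = -\,\psi(x_1, x_2) \quad \text{(fermions)}
% For two fermions in the same state, x_1 = x_2 = x:
\psi(x, x) = -\,\psi(x, x) \;\Rightarrow\; \psi(x, x) = 0
% i.e. zero probability of two fermions sharing the same state.
```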
And it's the atomic structure that enables basically all of chemistry to occur through covalent and ionic bonding, and so on. Which means that the universe as we know it, made up of atoms, would not exist if electrons were not indistinguishable fermions. Neat right?
I really enjoyed giving this talk, pitching it at a level that I felt comfortable explaining to a live audience. This inevitably meant cutting out some not-very-important but technically correct details. So please let me know if I cut out too much! Thanks again to Reading School for inviting me, and to the wonderful students who were so attentive and asked genuinely insightful questions.
#goldenrule #physics #fermisgoldenrule #goldenruleofphysics
Fermi's Golden Rule is a mathematical rule within the theory of quantum mechanics. It is used to calculate the transition probability between any two quantum states. In simpler terms, when we are studying systems with more than one possible state, like atoms which have multiple energy levels, the Golden Rule formula can tell us how likely an electron is to transition from one state to another at any point in time. But the situation is a little bit more complicated than that.
First, we see that a system's "allowed" energy levels can be calculated using the Schrodinger equation. This is done by inputting information about the system, such as its kinetic and potential energies, into the equation, and then solving for the "allowed" wave functions or eigenstates.
These are the energy levels in which our system will be found whenever we make a measurement on it. For example, we will find each electron in one of the shells around the nucleus. However if the system is "perturbed" slightly, such as by a negatively charged electron passing relatively far away from the atom, then the allowed energy levels of the atom will also change.
This is because the negatively charged electron will repel all the electrons in the atom, and attract all the protons in the nucleus. Thus, the potential energies we used to calculate the allowed energy levels of the atom will have changed, and hence the allowed energy levels themselves will have changed.
When this is the case, however, the electrons in the atom are still in the "old" energy levels. And they will need to transition into the new energy levels. So which exact level will each electron transition into?
Well, each electron could technically transition into any one of the new levels. However, there are two factors that affect how likely a given electron is to transition into a particular new state.
Firstly, there's the "coupling" between the electron's old state and the new state we're studying. This depends on factors such as how close in energy the old and new states are. The closer in energy, the stronger the coupling, and the more likely our electron is to transition into this new state.
Secondly, the transition probability also depends on the "density of states" around the new energy level. This means that states which are surrounded by other states very close in energy are more likely to receive the electron, because there are more states for the electron to transition into in the vicinity of that energy.
Combining these two factors, we can figure out the probability per unit time that an electron will transition into a particular state. This is exactly what Fermi's Golden Rule studies and gives us a formula for.
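For reference, the rule is usually written in the following form, with the two factors from above appearing explicitly:

```latex
% Fermi's Golden Rule: transition rate from state i to state f
\Gamma_{i \to f} = \frac{2\pi}{\hbar}
  \left| \langle f | H' | i \rangle \right|^2 \rho(E_f)
% |<f|H'|i>|^2 : coupling between old and new states (H' is the perturbation)
% rho(E_f)    : density of states around the new energy level E_f
```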
Timestamps:
0:00 - Energy Transitions and the Schrodinger Equation
2:42 - Big Thanks to Shortform for Sponsoring This Video!
4:39 - Changing the Energies in Our System
6:28 - Fermi's Golden Rule for Transitions
This video was sponsored by Shortform #ad
In this video, we take a look at normal "total derivatives" as well as "partial derivatives". We start by understanding that a total derivative is used to measure the rate of change of one quantity with respect to another, even when that rate of change is not constant. This is a very basic principle in calculus that was worked on by both Leibniz and Newton.
The example given here is that of a car moving along a road. Even if the car does not move equal distances in equal time intervals, we can calculate its velocity at every point in time if we are able to calculate the total derivative of the car's displacement (position) with respect to time. In essence, the total derivative measures the rate of change of displacement with respect to time.
However, in some cases there are quantities that depend on more than one variable. In this video we look at the height of a surface sitting above the x-y plane, and the height at any point along the surface depends on both the value of x, and the value of y, at that point. This means we have a quantity h (representing the height) that is dependent on two variables - x and y.
However, we may want to measure simply how the height changes with one of the variables, without accounting for its change due to the other variable. This is where partial derivatives come in. It's first worth noting that the letter d used to represent normal derivatives becomes a "curly d" (∂) when we represent partial derivatives.
The partial derivative ∂h/∂x (for example) gives us the rate of change in height of the surface as we move along the x direction, for a constant value of y. In other words, we can find the gradient of the surface along x, having chosen a single value of y to move along. Similarly, ∂h/∂y shows how the height of the surface changes as we move along the y direction at a constant value of x. In each case, we can choose which constant value the held variable takes, and the formula for the partial derivative will account for this.
This is different to the total derivatives dh/dx and dh/dy because the total derivatives actually account for any interdependencies between x and y too - for example if y was a function of x then total dh/dx would be different to partial dh/dx.
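A quick numerical sketch of this distinction (with an assumed example surface h(x, y) = x² + y² and the constraint y = x, neither of which is from the video):

```python
def h(x, y):
    """Assumed example surface above the x-y plane."""
    return x ** 2 + y ** 2

eps = 1e-6
x0, y0 = 3.0, 3.0  # a point lying on the constraint y = x

# Partial dh/dx: vary x while holding y fixed (central difference).
partial = (h(x0 + eps, y0) - h(x0 - eps, y0)) / (2 * eps)

# Total dh/dx along y = x: vary x, and let y follow it.
total = (h(x0 + eps, x0 + eps) - h(x0 - eps, x0 - eps)) / (2 * eps)

print(partial)  # ~6  (= 2x)
print(total)    # ~12 (= 4x: the y = x dependence contributes another 2y * dy/dx)
```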
Partial derivatives are used in many different fundamental physics equations. In this video we look at a few different examples - the Classical Wave equation, the Schrodinger equation, the Heat equation, and the Euler-Lagrange equation. Each of these uses partial derivatives to represent relationships between quantities that may be dependent on multiple variables, but that we only want to study one variable's dependence on. In other words, each of these equations is a partial differential equation.
Timestamps:
0:00 - Total (Normal) Derivatives
4:47 - Partial Derivatives and the Curly D's
9:22 - Fundamental Physics Equations Using Partial Derivatives
Videos Linked in the Cards:
1) youtu.be/KpLno70oYHE
2) youtu.be/ChrFvXsnWk4
In classical physics, as well as in quantum mechanics, "energy" is a very useful mathematical concept that allows us to predict how a system will behave in different situations. This is primarily done through the use of the Law of Conservation of Energy. In classical physics it is important, therefore, that "energy" is a real, non-negative quantity. But then why does the total energy operator in quantum mechanics have the imaginary number in it?
In quantum mechanics, we often deal with a system's "wave function" - a complete mathematical description of the system, which allows us to calculate the probabilities of getting different experimental results if we were to make a measurement on the system. This wave function has both real and imaginary parts.
If we wanted to measure the energy of the system we are studying, the theoretical equivalent of that is "applying" (i.e. premultiplying) an operator to the wave function of the system. To measure a system's position, we apply the position operator. To measure its energy, we apply the total energy operator. The result of applying this operator, in a mathematical sense, is an eigenvalue (a REAL value) multiplied by the original wave function of the system (assuming the measurement does not change the system's state).
The real eigenvalue is the measured quantity. If we applied the total energy operator to our system, then the real eigenvalue is the total energy of the system. In other words, the energy of our system is NOT imaginary - it is very much real. However, the total energy OPERATOR, the thing we apply to the wave function in order to "make our measurement", is not real. The operator being imaginary is not a problem though, as the imaginary number essentially cancels out the imaginary parts of the wave function in order to give us a real-valued energy as the result of the experiment.
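A quick sketch of that cancellation, for a state of definite energy E:

```latex
% Total energy operator:
\hat{E} = i\hbar \frac{\partial}{\partial t}
% Applied to a definite-energy wave function \psi = \psi_0\, e^{-iEt/\hbar}:
i\hbar \frac{\partial \psi}{\partial t}
  = i\hbar \left(-\frac{iE}{\hbar}\right)\psi
  = E\,\psi
% The factors of i cancel, leaving the real eigenvalue E.
```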
Timestamps:
0:00 - Energy in Classical and Quantum Physics
2:55 - Quantum Wave Functions and Energy Operators
Around the time quantum mechanics was gaining steam as a way to describe the universe on the smallest scales, there were more and more questions cropping up about the meaning behind the theory. Although it made near-perfect mathematical predictions about what should happen in any given scenario and experiment, it went against a lot of theories that came before it (classical physics) in terms of the assumptions and implications of the theory.
In classical physics, if you get enough information about a system, you can exactly predict how it should behave at a later point in time. For example, if you know a particle's position and speed at a given point in time, you can work out where to find it some time later. This prediction would also work every single time we repeated the experiment of measuring the particle's position at a later point in time.
In quantum mechanics, however, we can get different experimental results for repeating the exact same experiment multiple times. And before each experiment, the only thing we can do is predict the probability of getting each possible measurement result (rather than predicting the exact result we'd get). Before the measurement, the system is in a blend, or superposition, of all possible measurement results. And upon doing a measurement, the wave function "collapses" into the single measurement result we find.
All of this goes against "common sense" and also classical physics. More importantly, this goes against "determinism", the idea that everything follows a set of rules that can exactly predict the result of an experiment given enough knowledge and information about the system.
Einstein, Podolsky, and Rosen (EPR) didn't like this. They used the logic employed by quantum mechanics to try and come up with a logical inconsistency. They studied the behavior (theoretically) of a pair of "quantum entangled" particles, separated by a large distance. Since the particles were entangled, making a measurement on one of them immediately also gave us information about the state of the other. But if quantum mechanics was right about the system being in a superposition before the measurement, and then collapsing right after it, then how did the second particle "know" when the measurement had been made?
They reasoned that the unmeasured particle, in order to obey other laws of physics, would have to instantaneously collapse into the right state (i.e. as soon as the first particle was measured). This went against the idea known as "locality", which said that information can only be communicated between two points in space as quickly as light could travel between them. "Instantaneous" collapse, after all, would occur faster than light could travel between the particles.
Therefore, they showed that the conventional interpretation of quantum mechanics broke both determinism and locality. And this was a problem because both classical physics and Einstein's theories of Relativity were heavily reliant on both principles. So EPR suggested an alternative explanation, known as a "hidden variable" theory.
They suggested that hidden variables, which we would never have access to, were engrained within the particles and told the particle what state to be in at any time and position. This way, we would just "catch" the particle in a particular state when we did a measurement. In other words, the hidden variable determined (deterministically) what state the particles should be in (i.e. no random wave function collapse), and since the variable was engrained into both particles, there was no faster-than-light communication.
Doing an experiment to distinguish between these two hypotheses (hidden variable vs. quantum) was difficult, until John Bell came along and showed that correlations between results of MULTIPLE such measurements were expected to differ between the hidden variable and quantum theories.
This is where our Nobel Prize winners come in. They each worked on improving Bell's theorem so it could be experimentally verified, did the experiments, closed loopholes, and further developed quantum information theories to study ideas like quantum teleportation!
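Bell-type correlation experiments of the kind the laureates performed can be illustrated with the CHSH combination of measurement settings (a sketch using the standard textbook angles; not from the video):

```python
import math

# Quantum prediction for the correlation of spin measurements on an entangled
# singlet pair, measured at detector angles a and b: E(a, b) = -cos(a - b).
def E(a, b):
    return -math.cos(a - b)

# Standard CHSH angle choices (in radians).
a, a2 = 0.0, math.pi / 2
b, b2 = math.pi / 4, 3 * math.pi / 4

S = abs(E(a, b) - E(a, b2) + E(a2, b) + E(a2, b2))
print(S)  # ~2.828 (= 2*sqrt(2)), exceeding the hidden-variable bound of 2
```

Any local hidden variable theory predicts S no larger than 2, so measuring S near 2.83 rules out that whole class of explanations.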
Thanks for watching, please do check out my links:
MERCH - parth-gs-merch-stand.creator-spring.com
INSTAGRAM - @parthvlogs
PATREON - patreon.com/parthg
MUSIC CHANNEL - Parth G's Shenanigans
Here are some affiliate links for things I use!
Quantum Physics Book I Enjoy: amzn.to/3sxLlgL
My Camera: amzn.to/2SjZzWq
ND Filter: amzn.to/3qoGwHk
www.nobelprize.org/prizes/physics/2022/summary
wikipedia.org/wiki/EPR_paradox
0:00 - Quantum Physics Basics
6:27 - EPR Paradox
10:53 - Hidden Variables, Bell's Theorem, Nobel Prize
Over the years, many scientists have been confident that physics was almost complete, and that humanity was just a small number of discoveries away from understanding everything in the universe. Usually though, this confidence was short-lived before a new discovery or leap in understanding completely turned everything on its head, and opened up brand new areas of physics. What do you think - will physics ever be complete?
In this video, I wanted to share a fun little story about how physics has seemed close to being "complete", and then something new made us realize that we were quite far from the end after all. There are a lot of great discussions about big paradigm shifts (such as the advent of general relativity or quantum mechanics) that have made us understand how much more physics there is to discover. But I wanted to share with you a much smaller, yet in my opinion clearer, case.
When we first learned about atomic structure, and how electrons were distributed into "shells" around the nucleus, we thought that this was the extent of the structure to the atom. We thought that electrons were found in shells (labelled with n, the "principal quantum number") with increasing energy further away from the nucleus. And this was all based on the principle that the lowest energy shells filled first, followed by higher energy ones.
However, when we found out how electrons actually filled up shells, we realized that they did not simply fill the n = 1 shell, then the n = 2 shell, and so on. In some cases, electrons partially filled a shell (such as n = 3), then started filling the next shell up (n = 4), and only then went on to complete the filling of the n = 3 shell. This meant that either shells did not fill up in order of increasing energy, or there was more to electronic structure than we understood at the time.
The correct answer was the latter. The so called "shells" were actually divided up into further energy levels, with the spacing between these energy levels being much smaller than the energy level spacing of the original shells. In some cases, there would be an overlap in energy between the highest energy "subshell" of one shell, and the lowest energy subshell of the next shell up. This was known as the fine structure of the atom, and it turned out that electrons did indeed fill energy levels from lowest to highest (but the shells were split into smaller subshell energy levels).
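The filling order described above is often summarized by the Madelung (n + l) rule - not named in the video, but it captures the same subshell energy ordering. A small sketch:

```python
# Subshells fill in order of increasing n + l (with ties broken by smaller n)
# - the Madelung rule, a shorthand for the subshell energy ordering above.
# l = 0, 1, 2, 3 correspond to the s, p, d, f subshells.
subshells = [(n, l) for n in range(1, 6) for l in range(n) if l < 4]
subshells.sort(key=lambda nl: (nl[0] + nl[1], nl[0]))

labels = "spdf"
order = [f"{n}{labels[l]}" for n, l in subshells]
print(order)  # note 4s appears before 3d, matching the filling anomaly above
```

The 4s subshell (n + l = 4) sits below 3d (n + l = 5) in energy, which is exactly the partial-filling behavior the shell model alone couldn't explain.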
At a later point, we found out that even the subshells were divided into much more closely spaced energy levels - the hyperfine structure. And in our latest quantum mechanical theory, we also understood what caused this further splitting. But what if the hyperfine structure was split into further closely packed energy levels? What if there's another layer of energy level splitting that even our best instruments are not good enough to measure yet? After all, we'd seen three increasingly fine levels of splitting when we initially thought there was only one.
Now none of this is true evidence that physics will never be complete. After all, just because it has happened to us a couple of times already does not mean it will continue to happen. But it is a question worth pondering. And the aim of this video was to share a story about physics WE ALREADY THOUGHT WE UNDERSTOOD coming up with new ways to surprise us. This doesn't even account for new physical theories that have not yet been discovered.
Thanks for watching, please do check out my links:
MERCH - parth-gs-merch-stand.creator-spring.com
Instagram - @parthvlogs
Patreon - patreon.com/parthg
Music Channel - Parth G's Shenanigans
Here are some affiliate links for things I use! I make a small commission if you make a purchase through these links.
Introduction to Elementary Particles (Griffiths) - the book used in this video: amzn.to/3I3ld71
Quantum Physics Book I Enjoy: amzn.to/3sxLlgL
My Camera: amzn.to/2SjZzWq
ND Filter: amzn.to/3qoGwHk
Microphone (Fifine): amzn.to/2OwyWvt
Gorillapod: amzn.to/3wQ0L2Q
Video linked in cards (and one that discusses differences in electron numbers per shell): www.youtube.com/watch?v=EhIbCrl1pGw
The Schrodinger equation is the governing equation of quantum mechanics, and determines the relationship between a system, its surroundings, and a system's wave function.
The wave function contains all the information we can know about a system, such as the probabilities of finding a particle within the system in different regions of space. Specifically, the square modulus of the wave function is related to the probability of each possible measurement result.
The Schrodinger equation simplifies down to "kinetic energy + potential energy = total energy", but using the language and quantities defined by the theory of quantum mechanics.
We can set up the Schrodinger equation for any system that we are studying, simply by adding together all the kinetic energies and potential energies within the system. In this video, we see how to do that for a hydrogen atom and a helium atom. Then we "solve" the Schrodinger equation by finding the allowed wave functions.
With a hydrogen atom, we merely need to account for the potential energy that comes about due to the electron-proton interaction. Since they are both charged particles, they exert electrostatic forces on each other, and hence there is a potential energy between them.
In a helium atom (2 protons, 2 neutrons, 2 electrons), things become a bit more complicated. To make things simple, we make 3 assumptions: (1) the nucleus is stationary, since it's much more massive than the electrons, (2) the nucleus behaves as one single object in order to avoid accounting for the interactions between the particles making up the nucleus, and (3) the atom is isolated and does not interact with anything outside it.
These simplifications allow us to much more easily build the Schrodinger equation for our helium atom. We only need to account for the kinetic energies of the electrons since we assume the nucleus is stationary. We also only need to account for 3 sources of potential energy. Two of these are the interactions between the nucleus and the two electrons, and the third is the electron-electron interaction. We would have to account for many more terms if we did not use the simplifications outlined above.
At this point, we have a differential equation that we can solve in order to find the allowed wave functions. But this equation is extremely difficult to solve analytically, and we don't have many techniques to do it. We instead have to resort to further simplifications or the use of computers to find "brute force" solutions. This last method is basically trial and error but with fairly educated guesses.
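As a rough illustration of the computational route (this example is not from the video, and uses a 1D harmonic oscillator rather than an atom), one can discretize the time-independent Schrodinger equation on a grid and diagonalize the resulting matrix:

```python
import numpy as np

# Solve -1/2 psi'' + V psi = E psi numerically, in units where hbar = m = 1.
N, L = 500, 10.0
x = np.linspace(-L / 2, L / 2, N)
dx = x[1] - x[0]

V = 0.5 * x**2  # harmonic oscillator potential as a stand-in example

# Kinetic energy via a second-order finite-difference Laplacian.
T = (-np.diag(np.ones(N - 1), -1) + 2 * np.diag(np.ones(N))
     - np.diag(np.ones(N - 1), 1)) / (2 * dx**2)
H = T + np.diag(V)

energies = np.linalg.eigvalsh(H)
print(energies[:3])  # ~[0.5, 1.5, 2.5], the known harmonic oscillator levels
```

The same discretize-and-diagonalize idea extends to atoms, though the multi-electron helium problem needs far larger grids (or cleverer approximations) because the wave function depends on the coordinates of both electrons.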
Thanks for watching, please do check out my socials here:
Instagram - @parthvlogs
Patreon - patreon.com/parthg
Music Channel - Parth G's Shenanigans
Merch - parth-gs-merch-stand.creator-spring.com
Here are some affiliate links for things I use! I make a small commission if you make a purchase through these links.
Introduction to Elementary Particles (Griffiths) - the book used in this video: amzn.to/3I3ld71
Quantum Physics Book I Enjoy: amzn.to/3sxLlgL
My Camera: amzn.to/2SjZzWq
ND Filter: amzn.to/3qoGwHk
Microphone (Fifine): amzn.to/2OwyWvt
Gorillapod: amzn.to/3wQ0L2Q
Cards linked in this video:
1) youtu.be/w9Kyz5y_TPw
2) youtu.be/SqACdvxJsiM
3) youtu.be/BFTxP03H13k
4) youtu.be/j0zghSW6loQ
Timestamps:
0:00 - What Does the Schrodinger Equation Mean, and How Do We Solve It?
3:29 - Building the Schrodinger Equation for the Hydrogen Atom
5:05 - A Simplified Model of the Helium Atom
7:06 - Building the Schrodinger Equation for a Simplified Helium Atom
10:07 - Solving the Schrodinger Equation?
The branch of science and math that deals with such randomness is known as "stochastics". The kind of randomness we'll focus on is the kind where we can't predict the result of an experiment or measurement before we make it, but crucially we do know the probability of each possible result before we make the measurement, and this does not change over time. An example is tossing a coin. A fair coin has a 50% chance of landing on either heads or tails and this does not change regardless of the results of any previous coin flips.
We understand the difference between perceived randomness and true randomness. A coin toss is perceived to be random because we don't have any way to collect all the data we would need to predict the outcome of a coin toss. A quantum measurement is considered truly random because it relies on the collapse of the wave function.
We'll take a look at a random walk model, which works on the principle that a particle is allowed to move a fixed distance every unit time, but in varying (random) directions. In 1D, the particle can move either up or down (for example), in 2D along any direction on a flat surface, and in 3D along any direction at all.
We can use a spinner to model the "randomness" in the random walk. The direction that the spinner lands on will determine which direction the particle moves for a given unit of time. The spinner can be spun repeatedly to model multiple steps in the random walk. The spinner has the kind of randomness we described at the beginning of this description.
The random walk model can be used to describe real life systems, such as particles undergoing Brownian motion. This involves particles jiggling around rather than staying still, due to collisions with smaller particles that are too small to be visible.
We can model randomness using a random number generator (RNG). We can get it to generate random numbers between 0 and 1, using a uniform distribution. This simple technique can be used to model both discrete and continuous systems, as well as uniform and non-uniform probability systems.
We see how to use the RNG numbers to model the random walk, by multiplying the random numbers by 360 to give the angle at which the particle moves for each time step. We also see how the RNG numbers can be used to model an unfair die, by assigning ranges between 0 and 1 to each possible die result based on the unfairness of the die.
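The two techniques described above might be sketched as follows (a minimal Python illustration, not code from the video):

```python
import random
import math

random.seed(0)  # fixed seed, just to make this sketch reproducible

# 2D random walk: each uniform [0, 1) number, multiplied by 360, gives the
# direction (in degrees) for one unit-length step, as described above.
x = y = 0.0
for _ in range(1000):
    angle = math.radians(random.random() * 360)
    x += math.cos(angle)
    y += math.sin(angle)
print((x, y))  # endpoint after 1000 random steps

# Unfair six-sided die: assign each face a sub-range of [0, 1) whose width
# equals its probability (here face 6 comes up 50% of the time).
probs = [0.1, 0.1, 0.1, 0.1, 0.1, 0.5]

def roll():
    r, cumulative = random.random(), 0.0
    for face, p in enumerate(probs, start=1):
        cumulative += p
        if r < cumulative:
            return face
    return len(probs)

rolls = [roll() for _ in range(10000)]
print(rolls.count(6) / len(rolls))  # ~0.5
```

The range-assignment trick works for any discrete distribution, fair or unfair, which is why a uniform [0, 1) generator is all you need.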
At the moment, most RNGs are pseudorandom rather than truly random - but maybe one day they will be! And they are the key to harnessing the power of randomness.
Thanks for watching, please do check out my socials here:
Instagram - @parthvlogs
Patreon - patreon.com/parthg
Music Channel - Parth G's Shenanigans
Merch - parth-gs-merch-stand.creator-spring.com
Here are some affiliate links for things I use! I make a small commission if you make a purchase through these links.
Introduction to Elementary Particles (Griffiths) - the book used in this video: amzn.to/3I3ld71
Quantum Physics Book I Enjoy: amzn.to/3sxLlgL
My Camera: amzn.to/2SjZzWq
ND Filter: amzn.to/3qoGwHk
Microphone (Fifine): amzn.to/2OwyWvt
Gorillapod: amzn.to/3wQ0L2Q
Videos linked in cards:
1) youtu.be/Is_QH3evpXw
2) youtu.be/fBR5HQ-Ja10
Brownian Motion: en.wikipedia.org/wiki/Brownian_motion
Timestamps:
0:00 - The Randomness Around Us
0:49 - What Do We Mean by Randomness?
2:38 - True vs. Perceived Randomness
4:27 - The Random Walk Model
6:24 - Brownian Motion: a Real Life Random Walk
9:50 - Random Number Generator: Modelling Reality
13:42 - The Problems with an RNG
We begin by considering a mass-spring system that can behave as a simple harmonic oscillator. When the mass is pulled on or pushed, thus extending or compressing the spring, the spring exerts a restorative force back in the direction of equilibrium. In other words, the spring exerts a force in order to go back to its natural length.
Once the spring is stretched and then released, the mass-spring system undergoes Simple Harmonic Motion - the spring exerts a force on the mass to bring the spring back to its natural length. The system oscillates back and forth symmetrically about the equilibrium position. Simple Harmonic Motion is defined by an object experiencing a force (technically acceleration) that is proportional to the object's displacement, and in the opposite direction to the displacement. We look at how this can be described by a simple differential equation by equating the net force on the system (F = ma) with the force exerted by the spring. We also see how the acceleration, a, of the system can be described as the second time derivative of the system's displacement or position.
Next, we look at damping, i.e. a force that resists the motion of the system. The simplest way to model this is that the damping force is proportional (and in the opposite direction) to the velocity of the system. We add this term to our differential equation, and convert the velocity term into the first derivative of displacement with time.
Finally, we add a driving force. This driving force can be any external force we can exert on the system, but we choose a sinusoidal force. The frequency with which the driving force varies is arbitrary. If it matches the system's "natural" frequency then we see "resonance", where the oscillation gets bigger and bigger in amplitude.
At this point we have our final differential equation - a driven, damped, harmonic oscillator. We take this equation and modify a few terms (though the equation still keeps the same form). And after doing this, we find an equation that can be used to describe a different kind of oscillator entirely - an electric oscillator! This time, the stuff that's oscillating is a series of charged particles within an electric circuit! The derivatives are now of charge with respect to time, rather than position.
The driving term now describes the voltage provided by the power source connected to the circuit we are studying (which is a series RLC circuit with an alternating voltage source).
The original "damping" force term now describes the resistive behavior of the circuit. In other words, any resistor in the circuit acts like the damping fluid for the mechanical (mass-spring) oscillator. The term in our equation describing the resistive effects is actually equivalent to Ohm's law (voltage across resistor = current x resistance).
The original "net force" term now describes how inductors behave in the circuit. The voltage across any inductor is given by the inductance multiplied by the rate of change of current, which is also the second derivative of the charge w.r.t. time.
The original "spring force" term is now given by the charge divided by the total capacitance in the circuit. In other words the capacitance behaves almost like an "inverse spring".
And so the equation for the circuit basically says that the voltage supplied by the power source is equal to the sum of the voltages across the inductor, resistor, and capacitor. But it also looks at how the system behaves like a driven, damped, harmonic oscillator!
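As a sketch of how the final oscillator equation can be solved numerically (this is an illustration, not from the video, and the parameter values are arbitrary), one can step the driven, damped oscillator forward in time:

```python
import math

# Euler-Cromer integration of the driven, damped oscillator
# m*x'' + c*x' + k*x = F0*sin(omega*t). With m->L, c->R, k->1/C and F->V,
# the identical code integrates the series RLC circuit (x becomes charge q).
m, c, k = 1.0, 0.2, 1.0
F0, omega = 1.0, 1.0          # driving at the natural frequency sqrt(k/m)
x, v, dt = 0.0, 0.0, 0.001

amp = 0.0
for step in range(200000):
    t = step * dt
    a = (F0 * math.sin(omega * t) - c * v - k * x) / m
    v += a * dt
    x += v * dt
    if t > 180.0:             # measure amplitude after transients die away
        amp = max(amp, abs(x))

print(amp)  # ~5.0, the resonant steady-state amplitude F0/(c*omega)
```

The one-to-one substitution of parameters is the whole point of the analogy: the same numerical solver describes both a mass bouncing on a spring and charge sloshing around an RLC circuit.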
Thanks for watching, please do check out my socials here:
Instagram - @parthvlogs
Patreon - patreon.com/parthg
Music Channel - Parth G's Shenanigans
Merch - parth-gs-merch-stand.creator-spring.com
Here are some affiliate links for things I use! I make a small commission if you make a purchase through these links.
Introduction to Elementary Particles (Griffiths) - the book used in this video: amzn.to/3I3ld71
Quantum Physics Book I Enjoy: amzn.to/3sxLlgL
My Camera: amzn.to/2SjZzWq
ND Filter: amzn.to/3qoGwHk
Microphone (Fifine): amzn.to/2OwyWvt
Gorillapod: amzn.to/3wQ0L2Q
My video about Ohm's Law: youtu.be/Zao9JV1BLg8
Extra Reading:
Voltage - en.wikipedia.org/wiki/Voltage
Timestamps:
0:00 - A Beautiful Parallel Between Two Areas of Physics
0:29 - Mass-Spring (Mechanical) Simple Harmonic Oscillator
4:38 - Damping the Mechanical Oscillator
6:33 - Driving the Mechanical Oscillator
8:40 - The Oscillator Equation and the Electrical Oscillator
9:16 - Charge, Current, and Electrical Stuff
11:14 - Driving the Electrical Oscillator (Power Source)
11:42 - Damping the Electrical Oscillator (Resistor)
12:52 - Inductance and Capacitance Terms
13:51 - Mechanical vs. Electrical Oscillator
In this video, we're looking at how there are two sides to every Maxwell equation - and therefore two ways of understanding each of Maxwell's equations.
Maxwell's equations of electromagnetism fall under the umbrella of classical physics, and describe how electric and magnetic fields are allowed to behave within our universe (assuming the equations are correct, of course). Electric and magnetic fields describe how electrically charged objects and magnetic objects, respectively, exert forces on each other.
Each of Maxwell's equations is a differential equation that can be written in one of two forms - the differential form, and the integral form. In this video, we look at two of these equations, and how each of them has two variations. We begin by studying the first Maxwell equation, which says (in the differential form) that the divergence of any magnetic field is always equal to zero.
The physical interpretation of the above statement is that if we consider any closed volume of space, the net magnetic field passing either into or out of the region must always be zero. We can never have a scenario where more magnetic field enters a closed region of space than leaves it (or vice versa). The divergence of the magnetic field measures how much field is entering or leaving at each point, and this must be equal to zero everywhere.
Conversely, this same equation can be written in integral form (i.e. from a slightly different perspective). The integral equation says that the integral of B.dS is equal to zero. B is once again the magnetic field, and dS is a small element of the surface surrounding the volume discussed above. This method breaks up the outer surface covering the volume into very small pieces, counts the amount of magnetic field passing through each surface element, and then adds up the contributions from all the elements making up the surface. This addition of contributions is given by the surface integral over the closed surface. In other words, the integral form of this Maxwell equation states the same thing as the differential form, but looks at it from a slightly different perspective. Note: the integral must be over a closed surface, i.e. there should be no holes or breaks in the surface.
We also see a similar sort of thing with the second Maxwell equation, which looks at the behavior of electric fields. The differential form states that the divergence of the electric field is equal to a charge density divided by epsilon nought, the permittivity of free space. This therefore says that for any closed volume, the net amount of field entering or leaving the volume is directly related to the density of charge enclosed within the volume. Therefore if the net charge in the volume is zero, then the net field entering or leaving it is also zero. If the net charge is positive, the divergence is greater than zero, and if the net charge is negative, the divergence is less than zero.
The integral equation states that the sum of the electric field contributions to each of the small elements making up the area surrounding the volume is equal to the total charge enclosed within the surface, divided by epsilon nought. So once again this is looking at the same scenario from a slightly different perspective.
Each Maxwell equation has these two ways of writing it, and one can convert between the differential and integral forms using vector calculus (specifically the divergence theorem and Stokes' theorem). It is generally simple to move between these forms, and we can use whichever one is mathematically most convenient at any given time.
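As an illustration of the integral form of the second Maxwell equation (this numerical check is not from the video, and the charge and radius values are arbitrary), one can add up the flux contributions over a closed sphere around a point charge:

```python
import math

# Numerical check of the integral form of Gauss's law: the flux of a point
# charge's E field through a surrounding sphere should equal q / epsilon_0,
# independent of the sphere's radius.
EPS0 = 8.8541878128e-12  # permittivity of free space, F/m
q, R = 1e-9, 2.0         # 1 nC charge, sphere of radius 2 m

# Integrate E . dS over the sphere in spherical coordinates. For a radial
# field, E . dS = E(R) * R^2 * sin(theta) dtheta dphi.
N = 200
flux = 0.0
for i in range(N):
    theta = (i + 0.5) * math.pi / N
    E = q / (4 * math.pi * EPS0 * R**2)
    flux += E * R**2 * math.sin(theta) * (math.pi / N) * (2 * math.pi)

print(flux, q / EPS0)  # the two values agree
```

Changing R leaves the flux unchanged, because the field weakens as 1/R^2 while the surface area grows as R^2 - exactly the balance the integral form expresses.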
Thanks for watching, please do check out my socials here:
Instagram - @parthvlogs
Patreon - patreon.com/parthg
Music Channel - Parth G's Shenanigans
Merch - parth-gs-merch-stand.creator-spring.com
Here are some affiliate links for things I use! I make a small commission if you make a purchase through these links.
Introduction to Elementary Particles (Griffiths) - the book used in this video: amzn.to/3I3ld71
Quantum Physics Book I Enjoy: amzn.to/3sxLlgL
My Camera: amzn.to/2SjZzWq
ND Filter: amzn.to/3qoGwHk
Microphone (Fifine): amzn.to/2OwyWvt
Gorillapod: amzn.to/3wQ0L2Q
Cards linked to this video:
1) youtu.be/LYs4clrQUjE
2) youtu.be/0jW74lrpeM0
3) youtu.be/hI4yTE8WT88
4) youtu.be/pTMh1yyqVC8
Timestamps:
0:00 - Electric and Magnetic Fields Described by Maxwell's Equations
3:15 - Sponsor Message: NordVPN
5:17 - Differential Form of the First Maxwell Equation
8:09 - Integral Form of the First Maxwell Equation
9:47 - Differential Form of the Second Maxwell Equation
11:05 - Integral Form of the Second Maxwell Equation
#maxwell #electromagnetism #maxwellequations
This video was sponsored by Nord VPN #ad
Yes, I set myself an interesting challenge. Although I studied physics at university, and even focused on the study of small things, I never actually learnt any particle physics. So, many years after graduating, I decided to change that.
In order to ensure I didn't end up procrastinating, I only gave myself two 20 minute chunks of learning per day. This way I would be forced to focus, and to skip past any overly complicated bits of particle physics (and ponder over them in other spare time, rather than while learning). I used "Introduction to Elementary Particles" by Griffiths, because it was well reviewed online, and I also had the internet to help me if the textbook ever wasn't clear.
I didn't think I could learn all of particle physics in a satisfactory way within a week, but the truth is that this challenge allowed me to do some learning, which is more than I would have done had I not undertaken it. Providing this structure to my timetable allowed me to enjoy the learning I was missing so much.
So what did I actually learn? Well, the book first discussed the production and detection of particles. Particles can be produced in a few different ways. We talk about the production of electrons from a cathode ray tube, as well as the production of protons by ionizing hydrogen atoms. However, more exotic particles are not found in the ordinary matter we observe around us, so we have to rely on either cosmic rays (proton showers from space), nuclear reactors, or particle accelerators. The last option gives us the most control: we can smash together lots of particles at high energies and watch as they split up into smaller particles.
But then how do we know these particles are there? How can we detect them? Most particle detection relies on the fact that charged particles ionize the matter around them. This can lead to an ionization trail in cloud chambers or bubble chambers. And even uncharged particles can be detected when they split into other charged particles. The way these particles move can help us figure out something about their charge, mass, and other properties.
Now particle interactions can be very neatly described by Feynman diagrams, which show how particles behave over time. In this video we look at one that shows the Coulomb repulsion between two electrons (because they're both negatively charged). In particle physics, a force is mediated by the exchange of a particle, and in this case the electrons repel each other because they exchange a photon.
Ultimately, Feynman diagrams are based on some complicated rules that simplify down to some very beautiful visual rules focusing on the "vertices" between the lines representing different particles.
Thanks for watching, please do check out my socials here:
Instagram - @parthvlogs
Patreon - patreon.com/parthg
Music Channel - Parth G's Shenanigans
Merch - parth-gs-merch-stand.creator-spring.com
Here are some affiliate links for things I use! I make a small commission if you make a purchase through these links.
Introduction to Elementary Particles (Griffiths) - the book used in this video: amzn.to/3I3ld71
Quantum Physics Book I Enjoy: amzn.to/3sxLlgL
My Camera: amzn.to/2SjZzWq
ND Filter: amzn.to/3qoGwHk
Microphone (Fifine): amzn.to/2OwyWvt
Gorillapod: amzn.to/3wQ0L2Q
Timestamps:
0:00 - Can I teach myself particle physics in 1 week?
1:09 - Watch me learn (here's what I did!)
2:27 - What did I actually learn?
3:09 - How particles are produced!
4:55 - How particles are detected!
6:12 - Crossing symmetry (antiparticles moving backwards in time!)
6:54 - Organizing particles into groups
7:28 - Feynman diagrams
We begin by looking at a particle that can only move along a single direction, between two fixed points. These restrictions are absolutely not necessary in order to understand the wave function, but make visualizing it much easier.
In the Copenhagen Interpretation of quantum mechanics, when we make a measurement on a system, we can get a range of possible results (each with its own probability). So in our experiment with a particle along a line, the particle could be found in many different places with different probabilities.
What this translates to in practice is that if we did the exact same measurement on multiple identical systems, we would get different measurement results each time - but the ratios in which we get each possible measurement result correspond to the probabilities of getting each result.
This is very different to classical physics, where if we did the same experiment over and over then we would get the same measurement result in a very deterministic manner.
The Copenhagen Interpretation, however, links the quantum wave function with the probability of finding our particle in different regions along our line. Specifically, the square modulus of the wave function gives us a probability density for the possible measurement results.
In other words, if we find the square modulus of the wave function and then integrate this to find the area under our function between two points, then this gives us the probability of finding the particle between those two points in real space.
We need to take the square modulus of the wave function, rather than just squaring it, because the wave function can actually be complex (it can have imaginary parts). However probabilities can only ever be real and non-negative, hence the necessity of the modulus.
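For a concrete example (not from the video; the particle-in-a-box ground state is just a convenient wave function with a known closed form), here's how integrating the square modulus gives probabilities:

```python
import math

# Particle in a 1D box of length L = 1, ground state psi(x) = sqrt(2)*sin(pi*x).
# Integrating |psi|^2 between two points gives the probability of finding the
# particle between those points (midpoint-rule numerical integration).
def prob(a, b, n=10000):
    dx = (b - a) / n
    total = 0.0
    for i in range(n):
        x = a + (i + 0.5) * dx
        psi = math.sqrt(2) * math.sin(math.pi * x)
        total += abs(psi) ** 2 * dx
    return total

print(prob(0.0, 1.0))    # ~1.0: the particle is somewhere in the box
print(prob(0.25, 0.75))  # ~0.82: most likely found near the middle
```

The integral over the whole box coming out as 1 is the normalization condition: the probabilities of all possible positions must add up to certainty.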
In this video we also look at why we care about the wave function at all, if the physically measurable quantity is in fact the probability density, i.e. the square modulus of the wave function, and NOT the wave function itself.
Firstly, two systems may be similar in every way except for a "phase difference" in the wave function of one of them, given by a complex phase factor (involving the imaginary number i) multiplying the wave function of the other system. This phase difference ensures that the two systems are slightly different to each other. But their probability density functions are the same, because the phase factor disappears when taking the square modulus.
But again, why does this matter, if it's not directly measurable? Well, it turns out that in some circumstances the phase information has important consequences for things we measure experimentally. For example, the double slit experiment produces different results depending on the phase of the wave function representing particles passing through the slits. And the same is true for the Aharonov Bohm effect, for which I've made a full video (linked below).
And most importantly, the wave function is actually the quantity described by the Schrodinger Equation. This is the most important equation in the theory of quantum mechanics, and looks at how the wave function of a system changes over time (based on the properties of the system). It accounts for different kinetic and potential energies in the system to calculate the value of the wave function at every point in space and in time.
Thanks for watching, please do check out my socials here:
Instagram - @parthvlogs
Patreon - patreon.com/parthg
Music Channel - Parth G's Shenanigans
Merch - parth-gs-merch-stand.creator-spring.com
Here are some affiliate links for things I use! I make a small commission if you make a purchase through these links.
Quantum Physics Book I Enjoy: amzn.to/3sxLlgL
My Camera: amzn.to/2SjZzWq
ND Filter: amzn.to/3qoGwHk
Microphone (Fifine): amzn.to/2OwyWvt
Gorillapod: amzn.to/3wQ0L2Q
Videos Linked in Cards:
2:32 - Copenhagen Interpretation youtu.be/qJCh53SdS6s
5:08 - Imaginary Wave Function youtu.be/Ms2Y9g0VC-c
6:44 - Aharonov Bohm Effect youtu.be/YMjD8jevTUw
7:13 - Schrodinger Equation youtu.be/BFTxP03H13k
Timestamps:
0:00 - Measuring a Particle's Position, and Probabilities!
1:27 - Identical Measurements in Classical vs Quantum Physics
3:23 - How Probabilities Relate to the Wave Function
4:39 - The Imaginary Wave Function and Its Phase
7:00 - The Schrodinger Equation
7:33 - What the Wave Function REALLY Represents
In this video, we will be looking at two particle interaction processes that commonly occur in our universe. Then, we will see what links these two processes, despite initially looking quite different. Finally, we'll learn a very basic (but not very rigorous) way to understand the notion that antiparticles move backwards in time.
The first process we will study is Compton Scattering. This occurs when a photon interacts with an electron. A photon carries some amount of energy, related to the wavelength of the source of EM waves from which it was created. The larger the wavelength, the less energy it carries. It's worth noting though that all photons travel through space at the same speed - the speed of light.
So when a photon meets an electron, the photon can transfer some energy to the electron so that it starts moving through space. As a result, the original photon is said to be "absorbed" by the electron, and a new photon is released. This new photon has energy equal to the original photon's energy minus the energy given to the electron. This way conservation of energy is obeyed. But conservation of momentum is obeyed too! The two new particles (the new photon and the electron) move through space such that their combined momentum equals the original photon's momentum.
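As a rough numerical sketch of this energy bookkeeping (constants rounded, with a 0.1 nm photon and a 90 degree scattering angle chosen purely for illustration), the Compton formula gives the scattered photon's longer wavelength, and conservation of energy then gives the energy handed to the electron:

```python
import math

h = 6.626e-34      # Planck constant (J s)
c = 3.0e8          # speed of light (m/s)
m_e = 9.109e-31    # electron mass (kg)

def compton_scatter(wavelength, angle):
    """Return (scattered photon wavelength, energy given to the electron)
    for a photon of the given wavelength scattering at the given angle."""
    new_wavelength = wavelength + (h / (m_e * c)) * (1 - math.cos(angle))
    energy_in = h * c / wavelength
    energy_out = h * c / new_wavelength
    return new_wavelength, energy_in - energy_out

# A 0.1 nm X-ray photon scattering at 90 degrees:
lam2, dE = compton_scatter(0.1e-9, math.pi / 2)
print(lam2, dE)  # the scattered wavelength is longer, and dE > 0 goes to the electron
```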
The second process looks at the interaction between an electron and its antiparticle, the positron. Any particles that have the same values for all possible descriptors, e.g. mass, charge, spin, etc., are said to be the same type of particle. All electrons have the same mass, charge, and so on. But if we take a particle and consider another one with all the same descriptors except the opposite sign of charge, then we are looking at the original particle's antiparticle. In other words, the positron has all the same properties as the electron except that it is positively charged rather than negative.
When a particle and its antiparticle meet, they annihilate each other and two photons are released. These two photons combined carry the same amount of energy as the initial two particles had. This process is known as pair annihilation. And aside from involving similar particles, it seems quite different to Compton Scattering.
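A quick back-of-the-envelope check of the energy released (rounded constants): for a slow-moving electron-positron pair, essentially all of the available energy is rest-mass energy, so each of the two photons carries away about 511 keV.

```python
m_e = 9.109e-31   # electron (and positron) mass in kg
c = 3.0e8         # speed of light in m/s
eV = 1.602e-19    # joules per electronvolt

# For a slow-moving electron-positron pair, essentially all the energy
# released is rest-mass energy, shared equally between the two photons.
total_energy = 2 * m_e * c**2
per_photon_keV = (total_energy / 2) / eV / 1e3
print(per_photon_keV)  # roughly 511 keV per photon
```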
However, if we use "Crossing Symmetry", we see that these two processes are not so different. When we write any of these processes in equation form, we can take any of the particles on either side (the reactant/before side or the product/after side) and move it to the other side of the equation, provided we turn it into its antiparticle. This is explained in more detail in the video.
If we start with pair annihilation and move the positron to the other side, while doing the same with one of the photons (using Crossing Symmetry of course), we end up with Compton Scattering! These processes are said to be inherently the same "process".
But more importantly, Crossing Symmetry allows us to consider the idea that a particle on the "after" side of any equation would turn into its antiparticle on the "before" side. Since an equation showing particle interactions allows us to see how particles behave over time, we can therefore imagine that the antiparticle is moving backwards in time relative to the original particle. This is one of the very basic explanations of why antiparticles can be thought of as moving backwards in time compared to their particle counterparts.
In reality though, the mathematics of particle physics makes it difficult to differentiate between a particle moving forward in time, and its antiparticle moving backward in time. That's the true reason for this notion - though it doesn't mean antiparticles do actually move backward in time! It's just a cool thing to think about.
Timestamps:
0:00 - Particle Physics - Two Processes That Are Surprisingly Similar
0:50 - Process 1: Compton Scattering
2:30 - Antiparticles: The Very Basics
3:56 - Process 2: Pair Annihilation
4:26 - Crossing Symmetry
6:15 - Do Antiparticles Move Backwards In Time? A Visual Analogy
Maxwell's Equations are a set of 4 equations that describe how electric and magnetic fields behave within our universe, as well as how they interact with each other. In this video, we look at each of the terms found in these equations. #maxwell #electromagnetism #fields
The first equation (Gauss' Law for Magnetism) states that the divergence of the magnetic field is equal to zero. In other words, it describes how any magnetic field must behave in order to exist in our universe. A magnetic field is a vector field that describes the forces exerted on external magnets placed in the field. The direction shows the direction of the force exerted on the north pole of the external magnet, while the size shows the strength of the force.
The divergence of this field (calculated using the vector operator nabla) can be geometrically interpreted as the amount of field flowing out minus the amount of field flowing in. Since the divergence is zero, all magnetic fields must flow into and out of any closed volume we may choose at exactly the same rate.
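A small symbolic check of this divergence idea (the example field here is hypothetical, shaped like the magnetic field circulating around a straight wire): computing the divergence term by term gives exactly zero.

```python
import sympy as sp

x, y, z = sp.symbols('x y z')

# A divergence-free vector field, circulating in the x-y plane
# (the shape of the magnetic field around a straight current-carrying wire).
Bx = -y / (x**2 + y**2)
By = x / (x**2 + y**2)
Bz = 0

# Divergence: sum of the partial derivatives of each component.
divergence = sp.simplify(sp.diff(Bx, x) + sp.diff(By, y) + sp.diff(Bz, z))
print(divergence)  # 0: as much field flows into any closed volume as flows out
```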
The second equation (Gauss' Law for Electricity) states that the divergence of the electric field is equal to the charge density in a given region of space divided by the permittivity of free space (electric constant). An electric field shows the forces exerted on a positive external charge placed in the field. And the divergence of this can be nonzero.
This divergence depends on the density of charge found within the considered volume of space. The larger the charge density, the larger the divergence. And the sign of the charges determines the sign of the divergence of course.
The permittivity of free space is a constant that determines how strong an electric field can be generated by a given charged object within our universe.
The third Maxwell equation (Maxwell-Faraday Equation) states that the curl of an electric field can be found by calculating the rate of change of any magnetic field in our system. In other words, a changing magnetic field can generate an electric field. The curl operator can be thought of as measuring the "circulation" of the field, which we discuss in this video.
The final Maxwell equation (Ampere's Circuital Law) states that the curl of the magnetic field can be found from the current flowing through the region, together with the rate of change of the electric field (the displacement current term), with factors of the permeability of free space included.
The permeability of free space is a constant that determines how strong a magnetic field can be generated by a given magnet within our universe.
The displacement current is a term Maxwell added to an equation that already existed, but was incomplete: Ampere's original law only accounted for the flow of real current. The displacement current term accounts for a changing electric field behaving like a current. So a magnetic field can be generated by a current flowing through the region of space, or by a changing electric field within it.
All of these equations can be combined to create the electromagnetic wave equation, which describes how EM waves move through a vacuum at the speed of light. Also, the speed of light is directly related to both the permittivity, and the permeability of free space.
Here are some useful links.
My Maxwell's Equations playlist: youtube.com/playlist?list=PLOlz9q28K2e6aNgl1zt1xccyy4Ofl3YAk
My video on the del / nabla operator: youtu.be/hI4yTE8WT88
Timestamps:
0:00 - The 4 Maxwell Equations
1:03 - Equation 1, Gauss' Law for Magnetism
3:15 - A Word from Wren, Our Sponsor
5:06 - Equation 2, Gauss' Law for Electricity
7:52 - Equation 3, Maxwell-Faraday Equation
10:30 - Equation 4, Ampere's Circuital Law
11:54 - Fun Fact About the Speed of Light!
#ad this video was sponsored by Wren!
Well, this is true if we have a given measurement device with a specific resolution. In this video we look at a length measurement made using a ruler with markings every millimeter.
Depending on what measurement system we use, we can say that any measurement the ruler makes will have a possible error of +/- 0.5 mm.
This is because any length between 4.5 mm and 5.5 mm (for example) would be rounded to a measured length of 5 mm.
So from this, we can see that the percentage (or fractional) error in this measurement is (0.5/5)*100 = 10%, which is quite large. In other words, a half millimeter error either way can have quite an impact on a 5 mm measurement.
However for a much larger measurement, say 50 mm, we can see the fractional error is much smaller - only 1%. This means that the range in which our measured length could actually fall is a much smaller proportion of the actual measured length, even though the absolute error is still the same at 0.5 mm.
Therefore, for a given device with a given resolution, try to make longer measurements if possible. This isn't so useful for measuring lengths, but certainly comes in handy in other scenarios.
For example, when measuring the period of oscillation of a pendulum, we can choose to make a single time measurement for many oscillations and then divide that measured quantity by the total number of oscillations. Since the number of oscillations is an integer without any error, the final fractional error in the period measurement is much smaller than if we just measured the time taken for one oscillation.
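These error estimates are easy to reproduce (numbers chosen to match the examples above, with the absolute error taken as half the resolution):

```python
def fractional_error(measured, resolution):
    """Fractional error for a reading from a device with the given resolution,
    taking the absolute error as half the resolution."""
    return (resolution / 2) / measured

# A 5 mm reading on a ruler marked every 1 mm: 10% fractional error.
print(fractional_error(5.0, 1.0))   # 0.1

# A 50 mm reading with the same ruler: only 1%.
print(fractional_error(50.0, 1.0))  # 0.01

# Timing 20 pendulum swings instead of 1: the absolute timing error stays
# the same, but dividing by 20 shrinks the error in the period by 20x.
total_time, n_swings, timing_error = 30.0, 20, 0.25
period = total_time / n_swings
period_error = timing_error / n_swings
print(period, period_error)  # 1.5 s with an error of only 0.0125 s
```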
Thanks so much for watching - please do check out my socials here:
Instagram - @parthvlogs
Patreon - patreon.com/parthg
Music Channel - Parth G's Shenanigans
Merch - parth-gs-merch-stand.creator-spring.com
Many of you have asked about the stuff I use to make my videos, so I'm posting some affiliate links here! I make a small commission if you make a purchase through these links.
A Quantum Physics Book I Enjoy: amzn.to/3sxLlgL
My Camera (Sony A6400): amzn.to/2SjZzWq
ND Filter: amzn.to/3qoGwHk
Microphone and Stand (Fifine): amzn.to/2OwyWvt
Gorillapod Tripod: amzn.to/3wQ0L2Q
In this video, we look at the full version of the mass-energy equivalence relation, which accounts for an object's rest mass, as well as its momentum. This means the full equation can be used to describe objects that are stationary as well as objects that are moving relative to the observer. In contrast, the more famous, shorter version, E equals mc squared, can only be used to describe stationary objects. This is because the momentum of a stationary object with mass is equal to zero, so the equation reduces down to the well known version.
We see how the full mass-energy equivalence relation can be expanded in powers of v/c (that is, speed of object divided by speed of light) for values of v that are much smaller than the speed of light. And the largest term in this expansion looks exactly like the classical kinetic energy of the object. So in other words, the total energy of the object is given by its rest mass energy, as well as the energy it gains due to movement (or its kinetic energy). The relation just tells us that the movement energy it gains is slightly different to the classical kinetic energy.
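The expansion itself can be checked symbolically. A minimal sketch with sympy, expanding E = mc^2 / sqrt(1 - v^2/c^2) in powers of v:

```python
import sympy as sp

m, v, c = sp.symbols('m v c', positive=True)

# Total energy of a moving massive object in special relativity.
E = m * c**2 / sp.sqrt(1 - v**2 / c**2)

# Expand in powers of v, valid when v is much smaller than c.
expansion = sp.expand(sp.series(E, v, 0, 5).removeO())
print(expansion)
# The first term is the rest energy m*c^2, the next is the classical
# kinetic energy m*v^2/2, and the v^4 term is the leading correction to it.
```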
In addition to this, the long version of the equation can also be used to describe massless particles, such as photons. Photons are particles of light, and their mass is zero. Therefore, the mass-energy equivalence relation reduces down to E = pc, where p is the momentum of the photon.
In this video, we also look at why photons have momentum, despite high school physics often teaching us that momentum is given by p = mv. This only applies to objects with mass however! Massless objects can also carry momentum, and a photon's momentum is related to the frequency of the light source (or equivalently the wavelength).
We know photons carry momentum because when they interact with massive objects (objects with mass), the photon can transfer momentum to the object with mass. This is all in accordance with the Principle of Conservation of Momentum, and no mathematical fudging needs to be done in order to make this work. So this is not just a trick used to make Conservation of Momentum work, but rather we see experimentally that photons carry momentum, in a predictable and measurable way.
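As a small numerical illustration (rounded constants, with 500 nm chosen as a typical wavelength for visible light), a photon's momentum is p = h / wavelength and its energy is E = pc:

```python
h = 6.626e-34  # Planck constant (J s)
c = 3.0e8      # speed of light (m/s)

def photon_momentum_and_energy(wavelength):
    """Momentum p = h / wavelength and energy E = p * c for a photon."""
    p = h / wavelength
    return p, p * c

# Green light, roughly 500 nm:
p, E = photon_momentum_and_energy(500e-9)
print(p, E)  # a tiny momentum (~1.3e-27 kg m/s) and an energy of ~4e-19 J
```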
en.wikipedia.org/wiki/Mass%E2%80%93energy_equivalence
Timestamps:
0:00 - The Most Famous Equation is Incomplete! E = mc^2
1:46 - The FULL Mass-Energy Equivalence Relation (incl. Momentum)
3:38 - A Moving Object Gains... Kinetic Energy! (ish)
5:07 - Massless Particles (e.g. Photons)
5:56 - Momentum (incl. for Massless Particles)
7:33 - Summary of the Full Mass-Energy Equivalence Relation
In this video we look at Galilean relativity in some basic detail. We start by recalling that two observers, moving relative to each other, may observe the motion of an object differently. Observers moving at a constant speed relative to each other can be described by the "standard configuration" in relativity, as discussed in episode 1 of this series!
The special case we look at here is when the first observer sees the object as being stationary, while the other (moving at a speed v to the first) measures the object to be moving at a constant speed. We see how this looks from either perspective, and then plot a distance-time graph for each observer.
In order to transform from one observer's coordinate values for the object to the other observer's values, we need an equation that links the two. When calculating the second observer's coordinate values, we can see this should depend on the first observer's values, the time elapsed since the two reference frames crossed, and the relative speed between the two frames. The equation is x' = x - vt, where x' is the second observer's coordinate value, t is the time elapsed since the frames crossed, and v is the relative speed.
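The transformation itself is a one-liner. A minimal sketch (the numbers are arbitrary, chosen just to show the drift):

```python
def galilean_transform(x, t, v):
    """Position in the primed frame, which moves at speed v relative to the
    unprimed frame, with the frames coinciding at t = 0: x' = x - v*t."""
    return x - v * t

# An object fixed at x = 10 in the first frame, viewed from a frame
# moving at v = 2: it appears to drift backwards over time.
for t in (0, 1, 2, 3):
    print(t, galilean_transform(10, t, 2))  # x' = 10, 8, 6, 4
```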
We also see that from one observer's perspective, the other seems obviously wrong as their coordinate system is "moving". However, switching to the other reference frame gives us the opposite perspective.
Interestingly, we assume a "universal time" in this analysis. We imagine that both observers agree on how long it's been since t = 0, which is when the two reference frames exactly align. This implies there is some unique reference frame in the universe, whose time measurement applies to every other reference frame. This, however, is not true in reality. We can use this discussion about universal time to set up a discussion about special relativity in a future episode!
Episode 1: youtu.be/1fJwbpS_OZg
Timestamps:
0:00 - Basic Equations of Galilean Relativity
0:35 - Two Observers and their Reference Frames
2:08 - The Equations!
3:11 - Is One of the Reference Frames "Wrong"?
3:59 - The Assumption of Universal Time
4:49 - Setting Up for Special Relativity
In this video we start by understanding how black holes behave. They are extremely dense objects, meaning all their mass is packed into a very small region of space. Objects with mass will warp the spacetime around them (according to general relativity), and black holes are dense enough that not even light, the fastest moving thing in the universe, can escape a black hole once it crosses the event horizon. The spacetime is warped so that anything can only move towards the center of the black hole.
For this reason, we have no way of knowing what happens within a black hole's event horizon. We can only know a small number of basic properties of black holes (mass, charge, angular momentum/spin, radius), but we cannot know things like the mass distribution inside the black hole. This means that from any external observer's perspective, two black holes that were formed of different types of matter, or in different ways, appear identical if they have the same mass, charge, etc. This is known as the no-hair theorem, meaning black holes do not have "hair" on the outside that allows us to know stuff about their insides.
This is problematic because the information about the stuff that made up the black hole, or what is happening inside the black hole, is inaccessible to anyone outside the event horizon. This might not be an issue, as an observer within the event horizon may still be able to access it, so it's there somewhere. However the real problem comes in when we consider Hawking radiation.
Hawking radiation is a quantum mechanical and general relativistic effect, where black holes emit blackbody radiation due to their temperature. Technically speaking, this radiation, in the form of photons, is created a small distance OUTSIDE the event horizon, so nothing actually leaves the black hole itself. This radiation then escapes the region around the black hole, carrying away energy. As a result, the black hole loses mass - it effectively becomes smaller.
If Hawking radiation exists, black holes can eventually disappear entirely once enough radiation has been carried away. And because Hawking radiation depends only on the externally known properties of the black hole, two black holes with the same initial mass will emit the same Hawking radiation. Thus we cannot recover the information inside either of the black holes, and if they were to evaporate completely, that information would be lost forever! Where is this information going, and how is it leaving the universe?
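The temperature behind this, T = hbar * c^3 / (8 * pi * G * M * k_B), indeed depends only on the black hole's mass (for an uncharged, non-spinning hole). As a rough numerical sketch with rounded constants:

```python
import math

hbar = 1.0546e-34  # reduced Planck constant (J s)
c = 2.998e8        # speed of light (m/s)
G = 6.674e-11      # gravitational constant (m^3 kg^-1 s^-2)
k_B = 1.381e-23    # Boltzmann constant (J/K)
M_sun = 1.989e30   # solar mass (kg)

def hawking_temperature(mass):
    """Blackbody temperature of a black hole of the given mass (in kg)."""
    return hbar * c**3 / (8 * math.pi * G * mass * k_B)

# A solar-mass black hole is astonishingly cold: smaller black holes
# are hotter, so evaporation speeds up as the hole shrinks.
print(hawking_temperature(M_sun))  # ~6e-8 K, far colder than the CMB
```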
This is the Hawking paradox, also known as the Black Hole Information Paradox. For a while scientists suspected we would need some new mysterious physics to explain this, as the answer is not readily apparent in either general relativity or quantum mechanics. However recently some scientists discovered that using a theory of quantum gravity (mixing both quantum mechanics and general relativity), it was possible to actually figure out the internal state of a black hole. This kind of combination is what we expect in the ever-elusive Theory of Everything, or theory of quantum gravity.
This was done by studying the quantum wave function of the black hole, as well as the gravitational field generated by the hole a large distance away. The gravitons (gravity particles) generated by the black hole were shown to be different depending on the wave function of the black hole, which itself depends on the internal properties of the black hole. In other words, it may be possible to measure internal properties of the black hole by measuring its gravitational field far away from it. Or in other words, black holes MIGHT actually have hair, thus resolving the paradox.
In this video, we also look at why we shouldn't get over-excited about this theory yet, as it's just one possible explanation. But it is, at least, a little bit exciting!
Hairy black holes: en.wikipedia.org/wiki/No-hair_theorem
Hawking radiation: en.wikipedia.org/wiki/Hawking_radiation
The research paper in question: journals.aps.org/prl/abstract/10.1103/PhysRevLett.128.111301
Timestamps:
0:00 - Black Hole Basics
2:06 - No-Hair Theorem
4:15 - Hawking Radiation
6:06 - Hawking's Paradox (Black Hole Information Paradox)
6:55 - Cautiously Optimistic
7:45 - Gravitational Fields, Wave Functions, and Gravitons
Welcome to Relativity... Relatively Quickly - a series looking at the most basic concepts of three theories of relativity. This is the first episode - what is relativity?
In this video, we start from the beginning. We look at what relativity means, and how it discusses the relative motion between objects. From the perspective, or reference frame, of one observer, they are stationary and everything else around them can be either stationary or in motion. In this video we see how an asteroid moving past a spaceship has motion according to people in the spaceship. However from the perspective of the asteroid, the spaceship is moving and the asteroid itself is stationary.
Both reference frames are correct from their own perspectives. Since we are studying constant speed motion in a single direction, we won't look into too much detail about inertial reference frames, though this will be discussed later.
On Earth, we can think of walking along the ground as being due to us remaining stationary, and the Earth moving beneath our feet. However this is trickier to think about because we always use the Earth as our "stationary" frame of reference. The point is, though, that the Earth moves around the Sun (in the Sun's reference frame), and the Sun moves around the center of the galaxy, and this moves around a common center of mass. There is no one special stationary reference frame.
In this series, we'll study Galilean relativity, special relativity, and general relativity. The first of these is based on "common sense" Newtonian physics, that makes intuitive sense to us. However our intuitions don't work in scenarios that we do not commonly experience as humans. One example of this is travelling at very high speeds (close to the speed of light).
When studying objects moving relative to each other in Galilean relativity, we can use what is known as standard configuration. This is when one reference frame moves relative to another along their shared x (or x') direction with a constant speed v. In addition to this, it is agreed that both reference frames agree on a time coordinate of zero, meaning t = t' = 0, and this occurs when the frames overlap so x = x' = 0.
We jump back and forth between two reference frames moving relative to each other, and look at how each one would perceive the motion of a third object. We do this by plotting a distance-time graph for each frame. One frame could see the object as stationary, while the other could see it as moving! And this is all true without accounting for the spacetime bending weirdness of special and general relativity.
Timestamps:
0:00 - The basic concepts of relativity: what is it?
2:32 - All the theories of relativity.
3:27 - Big thanks to Wren for sponsoring this video!
5:13 - Galilean relativity and reference frames
7:41 - Standard configuration
8:32 - Weirdness in other relativity theories
#ad this video was sponsored by Wren!
The concept of "wave function collapse", or "collapse of the wave function", is one of the most intriguing aspects of quantum mechanics. It's also one of the reasons why quantum mechanics doesn't make intuitive sense to us yet.
Every quantum system can be described by a wave function. This is a mathematical function that contains all the information we know about our system. When we take its square modulus, we can calculate the probability of getting different measurement results if we were to make a measurement on the system. For example we can calculate the likelihood of a particle being found in a particular region of space, or in a particular energy level, or any other measurement outcome.
According to the Copenhagen Interpretation of Quantum Mechanics, a system exists in a superposition (blend) of lots of different measurement states all at once. The "weighting" of these states is directly related to the probability of finding the system in each of these states, as seen from the wave function. In other words, more likely measurement result states are more heavily represented in the superposition. And when we make a measurement, the system randomly and discontinuously collapses into one of the possible measurement states. We have no way of knowing which state a particular system will collapse into. This is known as the collapse of the wave function. It is one of the quirks of quantum mechanics.
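A toy simulation of this picture (the three-state superposition and its amplitudes are made up for illustration): the square moduli of the amplitudes give the probabilities, and a "measurement" picks one definite outcome at random with those weightings.

```python
import numpy as np

rng = np.random.default_rng(0)

# A toy superposition over three measurement states (arbitrary amplitudes).
amplitudes = np.array([1 + 1j, 2 + 0j, 0 + 1j])
amplitudes = amplitudes / np.linalg.norm(amplitudes)  # normalize the state

# Born rule: probabilities are the square moduli of the amplitudes.
probabilities = np.abs(amplitudes) ** 2
print(probabilities)  # sums to 1; heavier weightings are more likely outcomes

# "Measurement": the state collapses randomly into one definite outcome.
outcome = rng.choice(len(amplitudes), p=probabilities)
print(outcome)  # one of 0, 1, 2 - unpredictable for any single system
```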
This is very different to the system already being in a state and then a measurement just gives the observer information about what state the system is in. Check out this video if you want to find out more about how these two ideas are different, and why quantum mechanics goes with the first idea: youtube.com/watch?v=LR5kfhrs4Cc
This idea can lead us to believe that we influence the universe by making measurements. However, the physics idea of measurement is still being debated, and could even involve interactions between systems without a conscious observer. Consciousness is not necessarily the key to causing wave function collapse.
Additionally, this strange idea is very much a part of the Copenhagen Interpretation of Quantum Mechanics. In fact, it forms one of the postulates (assumptions on which the theory is based). Other interpretations of the mathematics try to get around this, but have different strengths and weaknesses compared to the Copenhagen Interpretation.
Before a measurement is made, the wave function follows the Schrodinger Equation, which dictates how wave functions evolve over time. Depending on the system and the initial conditions, the wave function can be constant or changing smoothly (continuously) as a superposition of different states.
At the instant the measurement is made, the wave function discontinuously (randomly, suddenly) collapses into one of the possible measurement states. This part is NOT dictated by the Schrodinger Equation. The probability of getting any particular result can be calculated from the wave function JUST BEFORE the measurement was made.
After the measurement, the wave function once again begins to follow the Schrodinger Equation smoothly, with the measurement result as the new initial state. The system may once again stay in that state, or change over time and "spread out" over multiple states.
We also look at how the Copenhagen Interpretation deals with measurement results for continuous and discrete variables.
Timestamps:
0:00 - Why Quantum Mechanics makes no sense - wave functions
2:10 - Superposition of states in the Copenhagen Interpretation
3:31 - Collapse of the wave function
4:23 - Measurement? Interpretations of Quantum Mechanics?
5:30 - Before, during, and after: Schrodinger vs Discontinuous
8:04 - Discrete vs Continuous measurement results
8:35 - Big thanks to Squarespace - link in description!
9:30 - Outro
#ad This video was sponsored by Squarespace!
Many of us will be familiar with the concept of using coordinates to represent positions in space. In 2 dimensions, we most commonly use x and y coordinates. These are coordinates that are perpendicular to each other (orthogonal) and always point in the same direction. When we study 3D systems, we add a third z coordinate which is perpendicular to x and y. This coordinate system is known as the Cartesian coordinate system, named after Descartes.
However in some situations, the Cartesian coordinate system is not the most convenient one to use. In this video we see how in two dimensions, a polar coordinate system made up of a radial and angular / azimuthal coordinate can equally validly represent any point on a flat plane. The radial coordinate is formed using the length of the vector between the origin and the point we are describing, and the angular coordinate is the angle between this vector and the positive x axis. This polar coordinate system (circular polars) is best used to describe systems with circular symmetry such as a spinning disc, or the rings of Saturn.
We can then extend this to 3 dimensions by simply adding in the z axis from the Cartesian coordinate system. This way, the x and y coordinates are replaced with the radial (r or rho) and angular (phi) coordinates, while the z coordinate remains the same. This coordinate system (cylindrical polar) is best for describing systems with cylindrical symmetry. This doesn't just mean perfect cylinders, but rather any system where rotating about just one axis (z axis) leads to invariant quantities (or no change being seen). Hence the new coordinates are (r, phi, z) rather than (x, y, z). And we also see how the coordinates are always orthogonal.
Another coordinate system, known as the spherical polar coordinate system, best describes systems with spherical symmetry (such as the electric field generated by a point charge). Unlike in cylindrical polars, the radial coordinate now represents the distance between the point being described and the origin in any direction, not just within the z = 0 plane. The angle phi remains the same, which is the angle between the vector's projection in the z = 0 plane, and the x axis. And a new angle, theta, is defined to be the angle between the r vector and the z axis. Hence the new coordinates are (r, theta, phi).
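These conversions can be sketched directly (using the convention described above, where theta is measured from the z axis and phi from the x axis in the z = 0 plane):

```python
import math

def spherical_to_cartesian(r, theta, phi):
    """theta is the angle from the z axis, phi the angle from the x axis
    measured in the z = 0 plane."""
    return (r * math.sin(theta) * math.cos(phi),
            r * math.sin(theta) * math.sin(phi),
            r * math.cos(theta))

def cartesian_to_spherical(x, y, z):
    r = math.sqrt(x**2 + y**2 + z**2)
    theta = math.acos(z / r) if r > 0 else 0.0  # undefined at the origin!
    phi = math.atan2(y, x)                      # atan2 picks the right quadrant
    return r, theta, phi

# Round trip for a point in the z = 0 plane, on the positive x axis:
print(cartesian_to_spherical(*spherical_to_cartesian(2.0, math.pi / 2, 0.0)))
# recovers (2.0, pi/2, 0.0) up to floating point error
```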
In this video we also look at two problems with the circular, cylindrical, and spherical polar coordinate systems. The first one is easy to solve. It involves the periodicity (repetition) seen when the value of phi or theta exceeds 360 degrees. In other words, there are multiple valid values for each angle even when describing the same point in space. However, this can be fixed by restricting phi to between 0 and 360 degrees (and theta to between 0 and 180 degrees). Sometimes this isn't even necessary, as certain physical systems need flexibility past 360 degrees.
The second problem is harder to solve. In these new coordinate systems, the value of phi or theta is not uniquely defined at the origin. In other words, when r = 0 the value of any of these angles could be absolutely anything. And there is no easy way to get around this problem. However, the polar coordinate systems are much better at describing circular, cylindrical, and spherical systems, so we put up with it; they're still more convenient than Cartesian coordinates here.
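As a rough numerical sketch of the conversions described above (the function names and conventions here are my own, not from the video), this is how Cartesian and spherical polar coordinates map onto each other, including simple handling of the two problems just mentioned:

```python
import math

def cartesian_to_spherical(x, y, z):
    """Convert Cartesian (x, y, z) to spherical polars (r, theta, phi).

    theta is measured from the positive z axis, phi from the positive
    x axis in the z = 0 plane (the physics convention).
    """
    r = math.sqrt(x**2 + y**2 + z**2)
    if r == 0:
        # The angles are not uniquely defined at the origin (the second
        # problem above) -- we just pick 0 by convention.
        return 0.0, 0.0, 0.0
    theta = math.acos(z / r)                 # 0 <= theta <= pi
    phi = math.atan2(y, x) % (2 * math.pi)   # restrict to 0 <= phi < 2*pi
    return r, theta, phi

def spherical_to_cartesian(r, theta, phi):
    x = r * math.sin(theta) * math.cos(phi)
    y = r * math.sin(theta) * math.sin(phi)
    z = r * math.cos(theta)
    return x, y, z
```

Round-tripping any point through both functions recovers the original Cartesian coordinates, which is a quick way to convince yourself the two systems describe exactly the same space.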
Please do check out my socials here:
Instagram - @parthvlogs
Patreon - patreon.com/parthg
Music Channel - Parth G's Shenanigans
Merch - parth-gs-merch-stand.creator-spring.com
Here are some affiliate links for things I use! I make a small commission if you make a purchase through these links.
Quantum Physics Book I Enjoy: amzn.to/3sxLlgL
My Camera: amzn.to/2SjZzWq
ND Filter: amzn.to/3qoGwHk
Microphone (Fifine): amzn.to/2OwyWvt
Gorillapod: amzn.to/3wQ0L2Q
Timestamps:
0:00 - 3D Cartesian Coordinate Grid
2:40 - A message from our sponsor, Wren - check out the link below!
4:31 - Circular Polar Coordinates (2D)
8:20 - Cylindrical Polar Coordinates (3D)
9:47 - Spherical Polar Coordinates (3D) + Electric Field
12:25 - Problems with Polar Coordinates
Videos Linked in Cards:
youtu.be/pTMh1yyqVC8
youtu.be/YMjD8jevTUw
youtu.be/ci06S-jNn8U
#ad this video was sponsored by Wren.
In this video we will look at Albert Einstein's actual quote about quantum mechanics and understand his discomfort with this area of physics. He apparently did not believe in a personal or religious God, but rather referred to the as-yet-unknown forces driving the universe as "God".
Let's remember that in quantum mechanics, any system is described by a wave function, which can be used to calculate the probabilities of getting any allowed measurement result when we study a system. This applies to the positions of particles in space, or the energy level they will be found in, or any other possible measurement result. Quantum mechanics, or rather the Copenhagen Interpretation of quantum mechanics, also says that before making a measurement, the system itself is in a superposition or blend of all possible allowed quantum states. And once we make a measurement, it randomly collapses into one of the allowed states (with probabilities given by the wave function squared). This is quite different to the system already being in a particular state, and then us just finding out this information when we make the measurement. In fact, this video shows how the two scenarios are mathematically different, and that experiments can be done to show the latter is not true: youtube.com/watch?v=LR5kfhrs4Cc&t=0s
So what did Einstein actually say about quantum mechanics? Here's an excerpt from a letter he wrote to Born in 1926: "Quantum mechanics is very worthy of respect. But an inner voice tells me this is not the genuine article after all. The theory delivers much but it hardly brings us closer to the Old One's secret. In any event, I am convinced that He is not playing dice". This shows that Einstein respected quantum mechanics a lot, but felt it might be incomplete.
He was uncomfortable with the idea of a system randomly collapsing into a state when a measurement was made. This is because this idea breaks the principles of determinism and locality, both of which are extremely important in Einstein's theories of Special and General Relativity. We see what is meant by determinism (realism) and locality, as well as how the Copenhagen Interpretation breaks both principles. For example, we can see how weather forecasts could be perfect in a deterministic world, if we could gather enough data. Also, we understand locality is related to causality, and how events can only affect each other if we allow enough time for light (or some slower signal) to pass between them.
Quantum mechanics breaks both principles because the random collapse is not deterministic, and a pair of entangled particles separated by large distances could break locality. This is because a measurement on one particle could result in an instantaneous collapse of the other (due to the wave function describing both particles in the system). This is a simplified description of the EPR Paradox.
Einstein proposed hidden variables to resolve the EPR paradox. These variables would not be accessible to observers, but would deterministically and locally drive quantum systems in a way that would appear random to us. However a few years later, John Bell came along and developed Bell's Theorem. This quantified the difference between the Copenhagen Interpretation and Einstein's deterministic local hidden variable theory, allowing us to conduct experiments to see which of the two was more likely to be correct. And Einstein was proven... WRONG?!
Well, at least partly. Hidden variable theories that were both local and deterministic were ruled out. However hidden variable theories that were deterministic and non-local are still possible. And the Copenhagen Interpretation is still possibly correct.
Please do check out my socials here:
Instagram - @parthvlogs
Patreon - patreon.com/parthg
Music Channel - Parth G's Shenanigans
Merch - parth-gs-merch-stand.creator-spring.com
Here are some affiliate links for things I use! I make a small commission if you make a purchase through these links.
Quantum Physics Book I Enjoy: amzn.to/3sxLlgL
My Camera: amzn.to/2SjZzWq
ND Filter: amzn.to/3qoGwHk
Microphone (Fifine): amzn.to/2OwyWvt
Gorillapod: amzn.to/3wQ0L2Q
Timestamps:
0:00 - This Quote from Einstein is Famous - WHY?
0:55 - Basic Principles of Quantum Mechanics
4:06 - Einstein's Opinion on Quantum Mechanics
6:07 - Important Principle #1 - Determinism
7:35 - Hidden Variable Theory
8:00 - Important Principle #2 - Locality
9:22 - EPR Paradox
11:32 - Bell's Theorem
Videos Linked in Cards:
1) youtube.com/watch?v=w9Kyz5y_TPw (Wave Functions)
2) youtube.com/watch?v=fBR5HQ-Ja10 (EPR Paradox)
#imaginarynumber #complexnumbers #physics
In this video, we'll look at the basics of complex and imaginary numbers, and how they are used in physics!
To begin with, we define the "imaginary number", i, as being the square root of -1. We're often told that negative numbers cannot have a square root, but imaginary numbers are based on the idea that they can. Engineers often use j to represent the imaginary number but we'll stick with i.
An imaginary number can be added to a "real" number (one which does not have a factor of i) in order to create a "complex" number. We look at how two complex numbers can be added together, as well as multiplied together.
Imaginary numbers do not fall on the (real) number line, but are instead found on an axis perpendicular to it. That way, we have a real axis and an imaginary axis creating an abstract space. This graph/space is known as an Argand diagram, and can be used to represent any complex number. To do this, start at the origin, move as many units in the real direction as the real component, and then as many units in the perpendicular, imaginary direction as the imaginary component. The point we end up at represents our complex number.
The complex number can also be represented with a vector from the origin to the corresponding point on the Argand diagram, so its horizontal component is the real part, and its vertical component is the imaginary part. Using this knowledge, as well as basic trigonometry, we can define two new quantities known as the absolute value, or modulus (length), of the vector, and the argument (angle from the real axis). These two pieces of information are just as good at defining a complex number as knowing its real and imaginary parts.
We can take this information to write a complex number in terms of its absolute value, and the sines and cosines of its argument. However this last part can be converted to a much simpler complex exponential using Euler's identity (en.wikipedia.org/wiki/Euler's_identity). We cover the basics of the exponential function as well as how much easier it is to deal with complex exponentials than sines and cosines (as exponentials are easier to multiply).
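As a small illustration of the ideas above (the specific numbers are just examples), Python's built-in complex type can show the modulus, argument, and polar form in action:

```python
import cmath

z = 3 + 4j                      # real part 3, imaginary part 4
modulus = abs(z)                # length of the vector on the Argand diagram (here 5)
argument = cmath.phase(z)       # angle from the positive real axis

# The polar form reconstructs the same number: z = |z| * e^(i*arg)
z_polar = modulus * cmath.exp(1j * argument)

# Multiplying complex exponentials just adds the arguments, which is why
# they are easier to work with than products of sines and cosines:
w = cmath.exp(1j * 0.5) * cmath.exp(1j * 0.7)
assert abs(w - cmath.exp(1j * 1.2)) < 1e-12
```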
We then look at two scenarios in physics where we need to represent systems using sines and cosines. The first is a mechanical harmonic oscillator, such as a mass oscillating on a spring. Instead of dealing with the sine (or cosine) representing the motion of the mass, we can represent it using a complex number evolving over time, do any calculation necessary, and then simply take the real part of the complex number. Taking the real part involves just reading the real component and ignoring the imaginary part. This works because the two components are separate from each other (perpendicular on the Argand diagram). The same logic can be used to represent electric circuits with a sinusoidal input potential difference. This is useful when we have capacitors, inductors, or resistors in our circuit, as the voltage and current are not always in phase.
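Here's a minimal sketch of the "take the real part at the end" trick for the oscillator, with made-up values for the amplitude, frequency, and phase:

```python
import cmath
import math

# Mass on a spring: x(t) = A cos(omega*t + delta).
# Represent it as the real part of a complex number rotating in time.
A, omega, delta = 0.02, 5.0, 0.3   # hypothetical amplitude (m), angular freq (rad/s), phase

def x_complex(t):
    # The full complex representation, which is easy to differentiate
    # and multiply because it's an exponential
    return A * cmath.exp(1j * (omega * t + delta))

def x_real(t):
    # "Taking the real part" recovers the physical displacement
    return x_complex(t).real

# The real part agrees with the usual cosine description:
t = 1.7
assert abs(x_real(t) - A * math.cos(omega * t + delta)) < 1e-12
```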
Finally, we look at how quantum wave functions are complex. Although the square (modulus) of a wave function relates to real, measurable probabilities, and the square modulus is not complex, the complex nature of the wave function can be measured in more subtle and indirect ways in effects such as the Aharonov-Bohm effect. Check out the links below for more info, as I've made a full video discussing it.
Videos linked in Cards:
youtu.be/Zao9JV1BLg8
youtu.be/w9Kyz5y_TPw
youtu.be/YMjD8jevTUw
Thanks so much for watching - please do check out my socials here:
Instagram - @parthvlogs
Patreon - patreon.com/parthg
Music Channel - Parth G's Shenanigans
Merch - parth-gs-merch-stand.creator-spring.com
Many of you have asked about the stuff I use to make my videos, so I'm posting some affiliate links here! I make a small commission if you make a purchase through these links.
A Quantum Physics Book I Enjoy: amzn.to/3sxLlgL
My Camera (Sony A6400): amzn.to/2SjZzWq
ND Filter: amzn.to/3qoGwHk
Microphone and Stand (Fifine): amzn.to/2OwyWvt
Gorillapod Tripod: amzn.to/3wQ0L2Q
Timestamps:
0:00 - What are Imaginary, Real, and Complex Numbers, and How Do We Add Them?
3:18 - Representing Complex Numbers on an Argand Diagram
5:08 - The Modulus and Argument of a Complex Number
6:10 - Trigonometric Identities and Exponential Functions
7:59 - Euler's Identity (and Why We Bother With It)
9:28 - Oscillating Mass on a Spring and Complex Numbers
10:23 - Alternating Current Power Sources
12:19 - Quantum Complex-ness
14:31 - Big thanks to Squarespace for Sponsoring!
15:27 - Outro
#ad This video was sponsored by Squarespace!
#degeneracypressure #quantum #neutronstar
In this video, we'll be looking at degeneracy pressure - a quantum mechanical effect that prevents the collapse of stars!
During the life cycle of a main sequence star, like our Sun, there are two major competing forces: the inward-acting gravitational force, and the outward-acting forces created by nuclear fusion in the core. In general, these forces are in balance because the star has a stable size.
However once all the hydrogen in its core runs out, the star has to start fusing helium. This continues until there is no more fusion fuel in the core. At this point, the gravitational forces dominate, causing the star to collapse into a white dwarf (for medium-sized stars).
A white dwarf is a dense bit of matter that is no longer radiating energy due to the lack of fusion. But why does a white dwarf not collapse down even further into a smaller region of space? We may think that electrostatic repulsion between the charged particles making it up has something to do with this. However, stars are usually massive enough that this electrostatic repulsion can be overcome by the strong gravitational forces.
The effect preventing this further collapse of white dwarfs is in fact degeneracy pressure. To understand it, we start by remembering that every quantum system can be described by a wave function. This includes multi-particle systems. Also, electrons are all indistinguishable particles - we cannot tell one from another. This results in some very interesting properties, including that fermions (a category of indistinguishable particles) have an antisymmetric wave function. All this means is that when we swap two of the particles in the system, the wave function becomes the negative of what it was before.
Finally, we recall that an electron wave function consists of a spin part and an orbital part. Due to the antisymmetric nature of the wave function, no two particles can have exactly the same spin and orbital parts. If they did, then we could swap those two particles and the wave function would remain the same - showing symmetric, or "boson" behaviour, not fermion behaviour.
This is why multiple electrons cannot occupy the same quantum states in an atom - it's Pauli's Exclusion Principle! If electrons are to occupy the same orbital state, then they must have different spins - and there are only two possible spin states, up and down. Therefore, each orbital state can only hold two electrons. And this is important because orbital states are closely related to where we can find electrons in space.
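The antisymmetry argument can be sketched numerically. Assuming two particle-in-a-box orbital states purely for illustration (not the actual white dwarf states), swapping the particles flips the sign of the combined wave function, and putting both particles in the same state makes it vanish entirely:

```python
import math

# Two single-particle states of a particle in a unit-width box
def psi_a(x):
    return math.sqrt(2) * math.sin(math.pi * x)        # ground state

def psi_b(x):
    return math.sqrt(2) * math.sin(2 * math.pi * x)    # first excited state

def antisym(f, g, x1, x2):
    """Antisymmetric two-fermion combination of states f and g."""
    return (f(x1) * g(x2) - f(x2) * g(x1)) / math.sqrt(2)

# Swapping the two particles picks up a minus sign...
assert abs(antisym(psi_a, psi_b, 0.2, 0.7)
           + antisym(psi_a, psi_b, 0.7, 0.2)) < 1e-12

# ...and two fermions in the SAME state give a wave function that is
# zero everywhere: Pauli's Exclusion Principle.
assert antisym(psi_a, psi_a, 0.2, 0.7) == 0.0
```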
Within a white dwarf, the gravitational forces causing collapse force the electrons into lower energy states as they occupy a smaller region of space. But eventually, all the lowest energy states are occupied (remember, only 2 electrons per state). And many electrons have much higher energy than this because the lower states are full. So this creates a degeneracy pressure that prevents electrons from being placed into even lower energy states.
Another way to look at this is that the gravitational forces tend to force electrons into the same region of space - or the same orbital states. But this cannot happen for more than two electrons at a time, so the white dwarf cannot be compressed any further than a specific size!
This same effect is found in neutron stars too, because neutrons are fermions. And since neutrons are not charged, we know it's not electrostatic repulsion that halts the gravitational collapse!
Videos linked in the cards for this video:
1) youtu.be/w9Kyz5y_TPw
2) youtu.be/skFU7pmBOys
Thanks so much for watching - please do check out my socials here:
Instagram - @parthvlogs
Patreon - patreon.com/parthg
Music Channel - Parth G's Shenanigans
Merch - parth-gs-merch-stand.creator-spring.com
Many of you have asked about the stuff I use to make my videos, so I'm posting some affiliate links here! I make a small commission if you make a purchase through these links.
A Quantum Physics Book I Enjoy: amzn.to/3sxLlgL
My Camera (Sony A6400): amzn.to/2SjZzWq
ND Filter: amzn.to/3qoGwHk
Microphone and Stand (Fifine): amzn.to/2OwyWvt
Gorillapod Tripod: amzn.to/3wQ0L2Q
Timestamps:
0:00 - Degeneracy Pressure, and the Force Balance in Stars
1:50 - White Dwarf - What Happens After Fusion Stops?
3:05 - Indistinguishable Particles and their Wave Functions
8:22 - Degeneracy Pressure in White Dwarves
#ad - This video was sponsored by Squarespace.
An electric field (often called the E-field) represents how charged objects interact with each other. Specifically, a charged object generates an E-field, and this shows what happens to a positive charge when placed near the source charge. The direction at each point shows the direction of the electrostatic force exerted on the new charge, and the size shows the magnitude of the force.
But it turns out there is also another kind of electric field - the electric displacement field (or D-field for short). It's related to the E-field by D = (epsilon)E. Epsilon is known as the permittivity of the material we happen to be studying, in which our fields are present.
In simple cases, epsilon is just a scalar, which represents the polarizability of the material. It measures how easily positive and negative charges are separated within the material due to an applied E-field. In more complicated materials, the polarizability can be direction-dependent (as it's easier to move charges in one direction over another).
Here we only focus on the simple cases where epsilon is a scalar. In these scenarios, the E and D fields are basically proportional to each other and serve the same purpose. But even in these cases, it's interesting to study the boundary between two materials. Even though the E-field may behave continuously across the boundary, the D-field may not as the polarizabilities of the two materials may be wildly different.
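As a quick numerical illustration of the boundary point above (the materials and field value are arbitrary examples, not from the video):

```python
# The same E-field in two materials with different permittivities
# gives very different D-fields, so D can jump across a boundary
# even where E is continuous.
eps0 = 8.854e-12           # vacuum permittivity (F/m)
eps_air = 1.0006 * eps0    # air: relative permittivity barely above 1
eps_water = 80.0 * eps0    # water: far more polarizable

E = 100.0                  # same E-field magnitude on both sides (V/m)

D_air = eps_air * E
D_water = eps_water * E

# D jumps by the ratio of the permittivities across the boundary:
print(D_water / D_air)     # roughly 80
```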
Here's a great wiki page if you want to read more: en.wikipedia.org/wiki/Electric_displacement_field
Thanks so much for watching - please do check out my socials here:
Instagram - @parthvlogs
Patreon - patreon.com/parthg
Music Channel - Parth G's Shenanigans
Merch - parth-gs-merch-stand.creator-spring.com
Many of you have asked about the stuff I use to make my videos, so I'm posting some affiliate links here! I make a small commission if you make a purchase through these links.
A Quantum Physics Book I Enjoy: amzn.to/3sxLlgL
My Camera (Sony A6400): amzn.to/2SjZzWq
ND Filter: amzn.to/3qoGwHk
Microphone and Stand (Fifine): amzn.to/2OwyWvt
Gorillapod Tripod: amzn.to/3wQ0L2Q
In this video we'll be studying 3 quasiparticles (sometimes known as collective excitations). They don't actually exist, in that they are not fundamental particles themselves, but can be thought of as mathematical simplifications of more complex systems.
The first quasiparticle we'll look at is the phonon. We look at a sound wave passing through a neatly arranged grid of atoms in a solid. Transferring energy to one end of the solid, we see that the atoms will oscillate in a special way so that the energy is transferred through the solid to the other side. This oscillation of the atoms is known as a sound wave.
If we want to pass multiple sound waves through the material at the same time, then in order to study these in detail, we have to look at the movement of each individual particle. This becomes extremely time-consuming. Instead, we can treat each sound wave as a made-up quasiparticle. Since each sound wave is represented by a "phonon", we can study how multiple sound waves interact by looking at how multiple phonons interact. And this is simpler than studying the motion of millions of atoms, because phonons follow some "common sense" physics laws such as conservation of momentum. We can also study the interaction of phonons with other particles, like photons (which are considered to be real particles)! I've made a video discussing phonons in more detail, you can find it here if you're interested: youtube.com/watch?v=_axrpVnGHpk
The next quasiparticle we'll discuss in this video is the electron hole. Atoms can form covalent bonds with each other by sharing electrons in order to have full outer shells. This is a stable configuration. However if we provide some amount of energy to our atoms, this can cause an electron to leave a covalent bond and break it. This free electron moves away through the lattice and leaves behind an "electron hole". This hole can be filled by another electron from a nearby bond, which means the hole moves to this second bond.
Sometimes, free electrons can come and fill the hole. This process is known as recombination, and we don't discuss that in this video. Instead, we focus more on bound electrons moving to fill a hole, resulting in the movement of this hole through the grid of atoms. Studying the movement of a hole is easier than looking at each of the individual electrons that move around the lattice in order to fill where the hole was previously. And on top of this, we can give some properties to the hole that form the entire foundation of semiconductor physics.
For example, when a hole is formed because an electron gains enough energy to leave its covalent bond and travel to other parts of the solid, the number of protons in the nuclei surrounding the hole is larger than the number of electrons in the surrounding area. There is an excess of positive charge, and this positive charge can actually be assigned to the hole. As the hole moves around, we can see (roughly) the motion of excess positive charge - not because the positive charges are moving, but because the regions of missing negative charge (i.e. holes) are moving.
The third quasiparticle we will look at is the electron quasiparticle. This involves looking at a free electron moving around a periodic potential (i.e. a regular arrangement of nuclei / charged particles). In a real scenario, the electron will be affected by the periodic (regular) arrangement of charges, so will not move in a straight line at a constant speed. Instead, we can devise a new particle that moves through an assumed vacuum, with similar properties to the electron but with a different mass. This is useful because we can assign the periodic accelerations of the electron (from the surrounding charges) to the quasiparticle's different mass. In other words, we imagine the quasiparticle doesn't experience any forces, and its average motion is the same as the electron's average motion. In certain systems where the periodic potential varies depending on the direction in which the electron is moving, the "effective mass" of the quasiparticle is direction-dependent!
Thanks so much for watching - please do check out my socials here:
Instagram - @parthvlogs
Patreon - patreon.com/parthg
Music Channel - Parth G's Shenanigans
Merch - parth-gs-merch-stand.creator-spring.com
Many of you have asked about the stuff I use to make my videos, so I'm posting some affiliate links here! I make a small commission if you make a purchase through these links.
A Quantum Physics Book I Enjoy: amzn.to/3sxLlgL
My Camera (Sony A6400): amzn.to/2SjZzWq
ND Filter: amzn.to/3qoGwHk
Microphone and Stand (Fifine): amzn.to/2OwyWvt
Gorillapod Tripod: amzn.to/3wQ0L2Q
Timestamps:
0:00 - Physicists Make Up Imaginary Particles
0:24 - Phonons
2:06 - Electron Holes
6:06 - Electron Quasiparticles
Sometimes, certain problems in quantum mechanics become unsolvable due to their mathematical complexity. But we still have techniques for approximating their solutions! One such technique is perturbation theory - let's see how we can use it. #perturbation #quantum #approximation
To begin this video, we will look at how we study quantum physics problems in the first place. We recall that every system has an associated wave function. For example if our system is an electron in space, then the wave function of that electron will give us the likelihood of finding the electron at different points in space. This is discussed in more detail in my wave functions video!
But how do we actually find the wave function of a system? Well, we have to solve the Schrodinger equation of course! This is the governing equation of the theory of quantum mechanics, and we plug in information about our system (such as kinetic energy and potential energy or potential well of the system), in order to solve for the allowed wave functions. Specifically, we plug the information about the system into the Hamiltonian of the Schrodinger Equation.
If we know how to solve the Schrodinger equation once we plug in the system's properties, then we can calculate the allowed wave functions (and energy levels) of the system. The energy levels are of course discrete rather than continuous, which is what is referred to as quantization.
But what happens when we cannot solve the Schrodinger equation for a given system? What if we don't have the mathematical skills or techniques to solve a particular differential equation? One way to solve such problems is numerically, using a computer. And what if we don't have a computer?
In such scenarios, physicists have developed some clever techniques to find approximate solutions to our equation. One such technique is perturbation theory. It works best for systems that are very close to other systems that we DO know the solutions for. In this scenario, the phrase "very close" means the new system can be described as the original system plus some small change. The example used in this video is the addition of a small Dirac delta function (spike) in the middle of a square potential well.
Then, the new system's Hamiltonian can be written as the old system's Hamiltonian plus some small change. Usually we also multiply the new / added small change by a factor lambda, that helps us in our upcoming mathematical steps. Lambda takes values between 0 and 1 as we go from the unperturbed, original system (lambda = 0) to the perturbed, new system (lambda = 1).
We can then say that the new system's allowed wave functions are equal to the old system's wave functions plus a small term proportional to lambda, plus a smaller term proportional to lambda squared, and so on. This forms an infinite series of "corrections" to the original wave function. We don't have time to calculate infinitely many terms, but luckily for most situations just the first new term is enough. And exactly the same logic applies for energy levels.
Luckily, the first order correction just depends on the change between the old and new systems, and the wave functions of the old system. And nothing else. The first order energy level correction is something we know how to calculate, meaning we don't have to deal with an "impossible" differential equation whilst still getting a very good approximation.
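Here's a sketch of that first-order calculation for the delta-spike example mentioned earlier, with illustrative values for the well width and spike strength (these numbers are my own, chosen just to make the arithmetic visible):

```python
import math

# First-order perturbation theory for an infinite square well of width a,
# perturbed by a delta-function spike alpha * delta(x - a/2) at the centre.
a = 1.0       # well width (arbitrary units)
alpha = 0.1   # spike strength (assumed small compared to the level spacing)

def psi(n, x):
    """Unperturbed wave function of the n-th energy level."""
    return math.sqrt(2 / a) * math.sin(n * math.pi * x / a)

def first_order_shift(n):
    # E_n^(1) = <psi_n| H' |psi_n>; for a delta-function perturbation the
    # integral collapses to the wave function evaluated at the spike:
    return alpha * psi(n, a / 2) ** 2

# Odd levels have an antinode at the centre and get shifted by 2*alpha/a;
# even levels have a node there, so they don't feel the spike at all.
print(first_order_shift(1))   # roughly 2*alpha/a
print(first_order_shift(2))   # roughly 0
```

Notice that the whole calculation only used the OLD wave functions and the perturbation itself, exactly as described above.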
And this is why perturbation theory is a very valuable technique for solving (or at least approximating) "impossible" to solve quantum mechanical systems.
Thanks so much for watching - please do check out my socials here:
Instagram - @parthvlogs
Patreon - patreon.com/parthg
Music Channel - Parth G's Shenanigans
Merch - parth-gs-merch-stand.creator-spring.com
Many of you have asked about the stuff I use to make my videos, so I'm posting some affiliate links here! I make a small commission if you make a purchase through these links.
A Quantum Physics Book I Enjoy: amzn.to/3sxLlgL
My Camera (Sony A6400): amzn.to/2SjZzWq
ND Filter: amzn.to/3qoGwHk
Microphone and Stand (Fifine): amzn.to/2OwyWvt
Gorillapod Tripod: amzn.to/3wQ0L2Q
My Quantum Mechanics Playlist (with lots of the Card Videos): youtube.com/playlist?list=PLOlz9q28K2e4Yn2ZqbYI__dYqw5nQ9DST
Timestamps:
0:00 - How Problems are Solved in Quantum Mechanics (Wave Functions, Schrodinger Eqn)
3:12 - Energy Levels and Wave Functions for Quantum Systems
4:53 - Perturbation Theory (for a Perturbed System)
6:30 - Sponsor Message (and magic trick!) - big thanks to Wondrium
8:55 - Approximating the new Wave Functions and Energy Levels
10:00 - First Order Approximation - EASY!
#ad - This video was sponsored by Wondrium
#electromagnetism #electricfield #maxwell #ad
We can't just make up a vector field and assume such an electric field exists in real life! The components of the field in any system have to follow a simple rule in terms of how they relate to each other.
In this video, we start by looking at what is meant by an electric field. We see how electrically charged particles and objects can generate electric fields. They represent what happens when a small positive charge is placed close to the object generating the field. For example, if the field is generated by a negative charge, then the small positive charge will be attracted to it. The closer the two charges, the stronger the attraction.
The electric field of any system / object can be thought of as a vector field, meaning it can be represented by a vector at every point in space. The size of the vector indicates the size of the force exerted on the small positive charge, and the direction indicates which way the force will act.
This means we can visually represent an electric field with either a bunch of arrows, or a set of column vectors (with 3 spatial components) at every point in space. This is important to understand, because each component of the electric field vector can depend not just on the coordinate along its own direction, but on all three spatial coordinates of that point. In other words, the x-component of the electric field, which represents the amount of the electric field pointing in the x-direction at a particular point in space, can depend on the x, y, AND z positions of that point in space.
This becomes important for us when looking at the "simple rule" in question. To understand this rule, we first look at one of Maxwell's Equations of Electromagnetism. It's the one that describes the curl of any electric field as being the negative time rate of change of any magnetic field in the same region of space.
For simplicity, in this video we study systems where the right hand side of the equation, looking at the time rate of change of the magnetic field, is zero. This means we are studying systems with a constant (or zero) magnetic field. We can equate the components of the vector on the left hand side (curl of E) with the zero vector components on the right.
One of the components of curl(E) is formed of the rate of change of an E-field component with respect to another direction, minus the rate of change of the second E-field component with respect to the first direction. For example, the x-component of the curl of E is given by dE_z/dy - dE_y/dz. Since each of these components is equal to zero in our scenario, we can set each of these derivatives to be equal to each other.
What this tells us is that the electric field behaves in a very specific way. The rate of change of the y-component of the field, as we move along z, MUST be equal to the rate of change of the z-component as we move along y. And similar relations exist for the x-z components, and x-y components.
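We can check this rule numerically. For a static field derived from a scalar potential (the potential below is an arbitrary smooth example, not one from the video), the two derivatives really do agree:

```python
# Checking the "simple rule": for a static E-field derived from a potential V,
# dE_z/dy must equal dE_y/dz (and similarly for the other component pairs).

h = 1e-5  # step size for central differences

def V(x, y, z):
    return x * y**2 + 3 * y * z + z**3 * x   # any smooth scalar potential

def E(x, y, z):
    # E = -grad(V), computed by central differences
    Ex = -(V(x + h, y, z) - V(x - h, y, z)) / (2 * h)
    Ey = -(V(x, y + h, z) - V(x, y - h, z)) / (2 * h)
    Ez = -(V(x, y, z + h) - V(x, y, z - h)) / (2 * h)
    return Ex, Ey, Ez

def dEz_dy(x, y, z):
    return (E(x, y + h, z)[2] - E(x, y - h, z)[2]) / (2 * h)

def dEy_dz(x, y, z):
    return (E(x, y, z + h)[1] - E(x, y, z - h)[1]) / (2 * h)

# The x-component of curl(E) vanishes, as the rule demands:
assert abs(dEz_dy(1.0, 2.0, 3.0) - dEy_dz(1.0, 2.0, 3.0)) < 1e-3
```

A made-up vector field that ignores this constraint would fail the check, which is exactly the point: not every vector field is a valid static E-field.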
This means real electric fields have a big constraint on how they can behave. We cannot just make up a vector field and expect to find a real field like that in the universe. Even a changing magnetic field changes the restriction on the E-field components, but does not lift it.
The physical significance of this is to do with electric potential difference, or voltage as we call it in the study of electric circuits. When a particle moves around in an electric field and then returns to its original position, the potential difference must be zero (as it returns to the same potential). This is why the RHS of our Maxwell equation was set to zero in this scenario.
Videos linked in the cards for this video:
Electric Fields: youtube.com/watch?v=gSI3PuHQO9A
Maxwell's Equation: youtube.com/watch?v=6Aab3k2nsOY&t=287s
Curl (Nabla/Del Operator): youtube.com/watch?v=hI4yTE8WT88
Many of you have asked about the stuff I use to make my videos, so I'm posting some affiliate links here! I make a small commission if you make a purchase through these links.
A Quantum Physics Book I Enjoy: amzn.to/3sxLlgL
My Camera (Sony A6400): amzn.to/2SjZzWq
ND Filter: amzn.to/3qoGwHk
Microphone and Stand (Fifine): amzn.to/2OwyWvt
Gorillapod Tripod: amzn.to/3wQ0L2Q
Timestamps:
0:00 - What even is an Electric Field?
1:50 - Vector Fields and how to represent them in component form
3:13 - Electric Field components can depend on ANY positional coordinate
4:56 - Maxwell's Equation
6:50 - The codependence of E-field components (when B is constant)
8:43 - The Special Rule!
10:05 - The physical significance of this rule
11:02 - Special shoutout to Squarespace for sponsoring this video!
11:59 - Let me know what to discuss in future videos :)
#ad - This video was sponsored by Squarespace.
Here's a silly video about approximations. It's not meant to be taken seriously, but some of the content in it is definitely worth checking out if you don't know about it already!
For example, the small angle approximation is briefly mentioned, which is where for very small angles, the value of sine and tangent are approximately equal to the angle itself. Similarly, for cosine, we use a truncated version of its Taylor series in the small angle approximation.
This approximation is immensely helpful in approximating solutions to difficult systems in physics. For example, a differential equation showing how pendulums behave, even in a very basic system, contains a sine term that makes the equation difficult to solve. However with the small angle approximation, the equation becomes very similar to the simple harmonic motion equation, which we do know how to solve.
So although approximations don't give us exact answers, they are extremely useful in solving problems we wouldn't otherwise know how to tackle.
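As a rough sketch of how good the small-angle approximation is (using hypothetical numbers of my own choosing, not anything from the video):

```python
import math

theta = 0.05  # a small angle, in radians

# sin and tan are approximately the angle itself; cos uses a truncated Taylor series
print(math.sin(theta))   # ~0.04998, very close to 0.05
print(math.tan(theta))   # ~0.05004, also very close
print(1 - theta**2 / 2)  # ~0.99875, close to cos(theta)

# Pendulum period under the small-angle approximation: T = 2*pi*sqrt(l/g)
l, g = 1.0, 9.81         # a hypothetical 1 m pendulum
T = 2 * math.pi * math.sqrt(l / g)
print(round(T, 3))       # ~2.006 s
```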
... and then there's good old π = 3. We don't talk about that.
The music in the background is (again) something I threw together just for this video. I will post a playthrough of it over on my second channel in the next few days - link to my channel is below!
Thanks so much for watching - please do check out my socials here:
Instagram - @parthvlogs
Patreon - patreon.com/parthg
Music Channel - Parth G's Shenanigans
Merch - parth-gs-merch-stand.creator-spring.com
In quantum mechanics, probability can flow through space and time, in exactly the same way as a fluid does!
It's worth recalling that in quantum mechanics, a system is described by its wave function - the mathematical function that contains all the information we can know about the system. And when we take the square modulus of the wave function, this can be used as a sort of probability density.
In other words, if we find the area under the wave function squared graph, between two points in space, then we calculate the probability of finding (e.g.) a particle between those two points in space. This can be extended to three dimensions, so the square modulus of the wave function can be integrated over a particular volume of space to give the likelihood of the particle being found in that volume.
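Here's a minimal sympy sketch of this idea, using the textbook ground state of a particle in a box (my choice of example, not the video's):

```python
import sympy as sp

x = sp.Symbol('x', real=True)
L = sp.Symbol('L', positive=True)

# Ground-state wave function of a particle in a box of width L
psi = sp.sqrt(2 / L) * sp.sin(sp.pi * x / L)

# Integrating the square modulus over the whole box gives total probability 1
total = sp.integrate(psi**2, (x, 0, L))
print(sp.simplify(total))  # 1

# Probability of finding the particle in the middle half of the box
p_mid = sp.simplify(sp.integrate(psi**2, (x, L/4, 3*L/4)))
print(p_mid)  # 1/2 + 1/pi, about 0.82
```

The particle is much more likely to be found near the middle than near the walls, which matches the shape of the squared wave function.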
The wave function of any system changes over time according to the Schrodinger equation. This can mean the wave function can simply "move" through space, hence the probability of finding our particle at different points in space can change, and the probability "flows" through space. Or a more complicated version is when the shape of the wave function changes. Either way, this results in a change in probability over time, which can be described as a probability flow.
The thing is though, the "flow" of probability through space is not some abstract concept - we can actually calculate this, and it turns out that the continuity equation describes this flow. The continuity equation is otherwise used to describe the flow of real fluids (i.e. liquids like water and juice, as well as gases), so it's almost surprising that probability in quantum mechanics follows the same equation.
The continuity equation looks at the density of a flowing quantity (whether that's a fluid or probability), and more specifically studies the rate of change of that density. In addition to this, it also looks at the divergence of the fluid or probability current. We take a brief look at this in the video, but for a more complete explanation check out the videos linked below.
In a nutshell, the equation considers an object flowing into a region of space, and equates this to the amount stored in the region plus the amount leaving the region. This makes intuitive sense, but is only true if the flowing quantity is conserved. For example, a real fluid has a conserved mass, meaning mass cannot be created or destroyed in the region of space we are considering. Similarly, total probability is automatically conserved, since the sum of all possibilities must always be 100%.
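To make the claim concrete, here's a small sympy check (my own sketch, not from the video): for a superposition of two free-particle plane waves, the probability density and probability current satisfy the continuity equation exactly.

```python
import sympy as sp

x, t = sp.symbols('x t', real=True)
hbar, m = sp.symbols('hbar m', positive=True)
k1, k2 = sp.symbols('k1 k2', real=True)

# Free-particle dispersion relation: omega = hbar*k^2 / (2m)
w1, w2 = hbar*k1**2/(2*m), hbar*k2**2/(2*m)

# A superposition of two plane waves - an exact solution of the free Schrodinger equation
psi = sp.exp(sp.I*(k1*x - w1*t)) + sp.exp(sp.I*(k2*x - w2*t))

rho = sp.expand_complex(psi * sp.conjugate(psi))           # probability density
j = (hbar/m) * sp.im(sp.conjugate(psi) * sp.diff(psi, x))  # probability current

# Continuity equation: d(rho)/dt + d(j)/dx should vanish identically
print(sp.simplify(sp.diff(rho, t) + sp.diff(j, x)))  # 0
```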
Videos linked in the cards for this video:
Wave Functions - youtube.com/watch?v=w9Kyz5y_TPw
Schrodinger Equation - youtube.com/watch?v=BFTxP03H13k&t=234s
Nabla / Del - youtube.com/watch?v=hI4yTE8WT88
Continuity Equation - youtube.com/watch?v=eR-LrWfrXl8
Timestamps:
0:00 - Probability Can Flow (Believe It Or Not)
0:33 - Wave Functions and Probability
1:59 - The Schrodinger Equation and Probability Flow
3:07 - Sponsor Message - Click the Link Below to Calculate Your Carbon Footprint!
4:52 - The Continuity Equation for Probability Flow
6:35 - The Continuity Equation for Fluids
7:04 - Interpreting the Continuity Equation for a Region of Space
#ad - This video was Sponsored by Wren.
So what do we do in the scenario where our teacher forgets to include this constant? Why we remind them of course! We let them know very firmly and clearly that the integral is indefinite, meaning we have not specified the limits, so we obviously need to add the constant of integration after completing our problem.
Why is the constant even needed at all? Well it's because integration, or at least indefinite integration, does not provide unique solutions. The easiest way to understand this, in my opinion, is to think of the reverse scenario - let's imagine we want to differentiate a function like x^2. We know the derivative of this is 2x. But similarly, the derivative of x^2 + (any constant) is also 2x. Therefore when we integrate 2x, our result could be x^2 + any constant. The "solution" we've found technically represents a family of solutions! And the only way to pin down the value of our constant c is to define the limits of the integral - this is enough to uniquely identify a solution.
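Amusingly, sympy also "forgets" the constant when asked for an indefinite integral - a quick sketch:

```python
import sympy as sp

x = sp.Symbol('x')

# sympy, like a forgetful teacher, leaves out the "+ c"
F = sp.integrate(2*x, x)
print(F)  # x**2

# But every member of the family x**2 + c differentiates back to 2x
for c in (0, 1, -7):
    assert sp.diff(F + c, x) == 2*x
```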
This is one of the first things we learn when studying calculus, and it should be (rightly) drilled into us as early as possible. However teachers are human too, and sometimes they forget to add the constant of integration. In that situation, students should always make sure to remind them :)
In this video I wanted to take a look at how we build up our mathematical representation (or at least one of them) of quantum mechanical spin. To do this, we'll start by looking at the spin of an electron, and understanding what it is.
In quantum mechanics, spin is the inherent angular momentum a particle / system has. It does not gain this angular momentum by moving along an angular (curved) path or spinning in some way - the particle just behaves as if it has angular momentum! Any extra angular momentum the particle gains as a result of its motion (orbital angular momentum) is separate from its spin, and the two add together to give the total angular momentum. Spin is a particle property, just like charge or mass.
With electrons, which are "spin-(1/2)" particles, we know that a measurement of its spin along a particular direction (e.g. z-direction) will result in us finding the electron in a "spin up" or "spin down" state. What this actually means is that the size of the electron's spin angular momentum is the same in both cases (i.e. same spin speed). But for spin up the electron behaves as if it's rotating counterclockwise around the axis, and for spin down it's clockwise. We just represent these spins with arrows pointing in the direction (up) or against (down) the axis for simplicity.
Any quantum system, like our electron, can be represented by a wave function. This wave function contains all the information we can know about the electron, such as what state it's in and the probability of finding a given spin state when we next make a measurement on it.
If we want to find out any information about a system, we have to make a measurement on it. One such example is trying to find the spin of our electron along the z direction. Another example is trying to find the particle's momentum in a given direction.
Taking a measurement is mathematically represented by a "measurement operator" being applied to the system's wave function. If the system is already in a nice "eigenstate", or a state that is one of the possible measurement results of our measurement, then making the measurement will not change the system state. In addition to this, the eigenvalue equation tells us the actual value we will measure in the experiment - in this case, the size of the spin of the electron.
If the system is not in an eigenstate, then a measurement will cause the wave function to "collapse" into one of the possible measurement results. The probability of the system collapsing into a particular state can be calculated from the wave function as it was before we made the measurement. This also links to the concept of superposition, since any quantum state can be written as some superposition of the measurement results of any measurement.
As we see in this video, a quantum state (such as the spin up state we could find our particle in) can be easily represented with a vector. And measurement operators can be represented by matrices. Then we can use the rules of linear algebra to see how measurement operators can be applied to a quantum system. We can also use the usual rules of matrix transformations to work out measurement operators in other directions (e.g. x- and y-directions).
We also see how the measurement matrices used to represent the spin measurements in x-, y-, and z-directions are very close to the Pauli matrices that crop up often when discussing spin-(1/2) particles. Lastly, we see how to construct bigger vectors and matrices for systems where there are more than two possible measurement results - it's just easiest to start with two-state systems like the spin up and spin down states of an electron.
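Here's a short numpy sketch of these ideas (my own illustration, working in units where hbar = 1):

```python
import numpy as np

# Pauli matrices; spin operators for a spin-1/2 particle are S_i = (1/2) * sigma_i
sigma_x = np.array([[0, 1], [1, 0]], dtype=complex)
sigma_y = np.array([[0, -1j], [1j, 0]])
sigma_z = np.array([[1, 0], [0, -1]], dtype=complex)
Sx, Sy, Sz = 0.5 * sigma_x, 0.5 * sigma_y, 0.5 * sigma_z

up = np.array([1, 0], dtype=complex)    # spin-up along z, as a vector
down = np.array([0, 1], dtype=complex)  # spin-down along z

# Eigenvalue equations: these states give +1/2 or -1/2 and are unchanged by measurement
print(np.allclose(Sz @ up, 0.5 * up))       # True
print(np.allclose(Sz @ down, -0.5 * down))  # True

# A superposition state: an equal blend of up and down
psi = (up + down) / np.sqrt(2)

# Born rule: probability of the wave function collapsing to spin-up on measurement
p_up = abs(np.vdot(up, psi))**2
print(p_up)  # 0.5

# The spin operators obey the commutation relation [Sx, Sy] = i*Sz
print(np.allclose(Sx @ Sy - Sy @ Sx, 1j * Sz))  # True
```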
Timestamps:
0:00 - Spin: Conceptually Hard, Mathematically Easy(ish)
2:50 - Measurement Operators (i.e. the Math Showing How to Measure a System)
3:50 - Mathematical Representation of Spin Wave Functions (as Vectors)
5:29 - Representing Measurement Operators as Matrices in Linear Algebra
6:40 - The Wave Function Collapses Depending on Our Chosen Measurement!
8:17 - Quantum Superposition (Blend) of Different States
9:29 - The Pauli Matrices
9:55 - Constructing Bigger Vectors and Matrices
On this channel, I've not properly discussed rotational kinetic energy. In addition to this, I've often referred to linear kinetic energy as simply "kinetic energy". So today we're setting the record straight.
Any object with angular motion (e.g. moving along a curved path or spinning about an axis) will have angular kinetic energy. This energy depends on the speed at which the object spins, the mass of the object, and the shape of the object too.
The angular speed of our object (i.e. magnitude of angular velocity) simply measures the angle the rotation covers in a unit of time. For example, our object could be moving at 45 degrees per second, or pi/4 radians per second. Generally we prefer to use radians as our angular unit.
The moment of inertia of our object accounts for both the mass and the shape of the object. It can be calculated by taking a very small chunk of mass making up the object, and multiplying this by the square of the perpendicular distance between the chunk of mass and the axis of rotation. Then we do this for all chunks of mass making up the object, and add up all these contributions.
In other words, the moment of inertia of an object can be found by calculating the integral of the square of the distance between the mass and the rotation axis, with respect to the mass of the object. Interestingly this tells us that two objects may have the same external shape, but if one is hollow and the other is solid / filled, they will have different moments of inertia. Also, an object will have a different moment of inertia depending on what axis we intend to spin it about!
We can therefore think of the moment of inertia as a measure of our object's resistance to angular motion. Or it's a measure of how much torque is needed to have our object experience a given angular acceleration. This is similar to how an object's mass is a measure of its resistance to linear motion, or how much force is needed for a specific linear acceleration.
In general there are many similarities between moment of inertia (angular) and mass (linear), or angular speed and linear speed, or other angular and linear quantities. For our solid spherical ball, the moment of inertia is given by (2/5)MR^2 where M is the mass of the sphere and R is its radius.
And the rotational kinetic energy of any object is given by finding (1/2)Iw^2 where I is the moment of inertia, and w (or omega) is the angular speed. This equation is similar to the linear kinetic energy equation, (1/2)mv^2. Both quantities are energies, so are measured in joules.
For our specific foam ball, we find its mass and radius by measuring these the usual way, and the angular speed by putting a dot on the ball and timing how long it takes to complete one full rotation. Then we can combine all this information to work out how much rotational kinetic energy it has as a result of spinning. We also work out the speed the ball would move with if it had an equivalent amount of linear kinetic energy!
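A quick sketch of the calculation, with hypothetical numbers standing in for the actual measurements made in the video:

```python
import math

# Hypothetical measurements for a foam ball (not the video's actual numbers)
M = 0.1   # mass, kg
R = 0.05  # radius, m
T = 0.5   # time for one full rotation, s

I_ball = (2/5) * M * R**2        # moment of inertia of a solid sphere
omega = 2 * math.pi / T          # angular speed, rad/s
E_rot = 0.5 * I_ball * omega**2  # rotational kinetic energy, J

# Linear speed giving the same kinetic energy: (1/2)*M*v^2 = E_rot
v_equiv = math.sqrt(2 * E_rot / M)

print(round(E_rot, 5))    # ~0.0079 J
print(round(v_equiv, 3))  # ~0.397 m/s
```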
Timestamps:
0:00 - Rotational Kinetic Energy
1:44 - The Equation, Moment of Inertia, and Angular Speed / Velocity
2:04 - Understanding Angular Velocity
3:19 - Moment of Inertia - How Hard is it to Spin a Ball?
In this video, we will take a look at what is known as the wave equation. In reality, there are a few different equations in physics (even in classical physics) that describe wave behavior, but the one we will look at describes the most basic classical waves, and is thus known as THE wave equation. It describes classical waves such as sound waves, electromagnetic waves, and water waves. The wave equation is a second-order partial differential equation, and in this video we will take a look at the one-dimensional version.
The equation itself says that the second order partial derivative with respect to time, of the displacement of the wave medium, is equal to the square of the wave speed multiplied by the second order partial derivative of the displacement with respect to our spatial direction. In the video we see how this is represented by all the symbols. We also understand differentiation (and derivatives) as taking the gradient of our u function (displacement) at every point. The partial derivatives ensure we keep other variables constant.
Solving the wave equation just means finding a function for u (displacement of the wave medium) that satisfies the equation. Beyond this, there are many possible solutions. The most basic one we usually study is a sinusoidal solution, both in time and in space. We will look at the mathematical form of this kind of solution. It's also interesting to note that many different kinds of sinusoid (i.e. with different amplitudes, frequencies, and phases) are allowed as solutions to the equation. These solutions are generally found using some tedious algebraic methods, such as separation of variables - very interesting mathematically, but not quite our focus as physicists.
The wave equation is what is known as a linear equation. Therefore, by the Principle of Superposition, any two solutions can be added together to find another solution. If we reverse this logic, we can say that complicated waves that are not necessarily sinusoidal in nature can be broken down into a sum of component sine waves, meaning they must be allowed solutions to the wave equation due to its linearity.
An excellent example of this is when two identical waves travel in opposite directions towards each other. The resultant wave (what is seen when these waves overlap) is known as a standing wave. It appears not to travel in either direction, but rather just oscillate between zero amplitude and maximum amplitude in the same region of space. The standing wave is another solution to the wave equation as it is made of two simpler solutions (the two waves travelling in opposite directions).
And lastly, we see that there is a very boring and trivial solution to the wave equation, which is u = 0. This represents there not being a wave in the region of space and time that we are studying, and it easily fits the wave equation, which in this case becomes 0 = 0. However, this solution is very important. If it did not solve the wave equation, that would indicate that the wave equation does not permit any region of space and time where a wave does not exist. This would be a problem, as the wave equation would then not be a good model of our real universe.
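All of these claims are easy to check symbolically. Here's a short sympy sketch (my own illustration): the left- and right-travelling sinusoids, their standing-wave sum, and the trivial solution all satisfy the wave equation.

```python
import sympy as sp

x, t = sp.symbols('x t', real=True)
A, k, c = sp.symbols('A k c', positive=True)
w = c * k  # dispersion relation linking angular frequency and wavenumber

def residual(u):
    # Residual of the 1D wave equation u_tt = c^2 * u_xx; zero means u solves it
    return sp.simplify(sp.diff(u, t, 2) - c**2 * sp.diff(u, x, 2))

right = A * sp.sin(k*x - w*t)  # sinusoid travelling right
left = A * sp.sin(k*x + w*t)   # identical sinusoid travelling left

print(residual(right))          # 0
print(residual(left))           # 0
print(residual(right + left))   # 0 - the standing wave is also a solution
print(residual(sp.Integer(0)))  # 0 - the "boring" trivial solution u = 0
```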
Here are some useful resources for understanding how we actually go about solving the wave equation, mathematically speaking:
youtube.com/watch?v=EJLympg3XMM
personal.math.ubc.ca/~feldman/m267/separation.pdf
Timestamps:
0:00 - Understanding The Wave Equation in 1 Dimension
2:10 - Second Order Partial Derivatives Explained
3:43 - What Does it Mean to "Solve" the Wave Equation?
4:41 - What Do Basic Solutions Look Like?
5:08 - The Linearity of the Wave Equation (and Principle of Superposition)
6:55 - The Most Boring (and Most Important) Solution
The Dirac Delta Function (named after Paul Dirac) or Unit Impulse function can be thought of as a spike. The value of the function is zero everywhere, except for at one particular value, where the function goes to infinity (i.e. its value is undefined). In this video, we look at how despite nothing in the real world behaving like this, we use the delta function to model various phenomena in theoretical physics.
Firstly, we define what the delta function looks like. Then we look at a couple of interesting mathematical properties of the function. One of these properties is that the integral of the delta function (which gives the area between the function and the horizontal axis) is equal to 1. This is a strange concept - how can the area under an infinitesimally thin, infinitely tall function be a finite value, and why is it specifically defined to be 1? Read up more about the function here: en.wikipedia.org/wiki/Dirac_delta_function and here: tutorial.math.lamar.edu/classes/de/diracdeltafunction.aspx
We also see how the integral of a function corresponds to the area between that function and the horizontal axis, for those of us that are unfamiliar with this idea.
Secondly, we see that a delta function can be "moved" so the spike is at a different x position, in a similar way to how other functions are translated. If f(x) is centered on 0, then f(x-a) is centered on a.
This allows us to discover another property of the delta function - it can be used to pick out values of functions at specific points. For example, the integral of the product between a sine function and a delta function centered at a, is given by the value of the sine function at a. This is a remarkable property that allows us to encode many ideas in physics.
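sympy can verify both of these properties directly - a quick sketch:

```python
import sympy as sp

x, a = sp.symbols('x a', real=True)

# The area under the delta function is exactly 1
area = sp.integrate(sp.DiracDelta(x), (x, -sp.oo, sp.oo))
print(area)  # 1

# The sifting property: a delta centered at a picks out the value sin(a)
sifted = sp.integrate(sp.sin(x) * sp.DiracDelta(x - a), (x, -sp.oo, sp.oo))
print(sifted)  # sin(a)
```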
Firstly, we can treat particles as being point masses and charges. In reality their mass and charge are distributed over finite regions of space, but they're so small compared to us that we can very nearly pretend the masses and charges are concentrated at one infinitely small point. In this video we see how charged particles can be represented as point charges using the Dirac delta function. We take a function representing the charge (i.e. the magnitude of the charge on the particle), and multiply it by the delta function to give us the charge density. This way we can integrate the charge density to give us charge, while also encoding information about the position of the particle via the delta function.
Additionally, the delta function can be used to localize objects in time, as well as in space. This is often done when studying impulses (i.e. forces applied to objects for very short periods of time). An example is a footballer kicking a football - the force is exerted for a very short time. In that case, a delta function with time on the horizontal axis can be used to localize the force exerted on the ball to a particular instant in time!
So in summary, the Dirac Delta Function is a physically impossible but mathematically essential function (that's not really a function). It helps us greatly simplify many different ideas in theoretical physics.
Timestamps:
0:00 - The Dirac Delta Function - What Does It Look Like?
1:48 - Mathematical Property: The Area Under the Delta Function is 1?!
3:24 - Translating the Function, and Using It to Pick out Function Values
4:26 - Uses in Theoretical Physics - Representing Point Charges
9:06 - Impulses - Localizing in Time Rather Than in Space
10:03 - Summarizing the Impossible (But Essential) Function
In this video, we will look at the Principle of Superposition. It explains why, when two waves overlap, we can simply add their displacements at each point to find the resultant wave. This matches our common-sense intuition. But common sense isn't always correct - so why is it accurate here?
To understand this, we need to realize that the wave equation (the classical governing equation for describing all sorts of waves) is linear in wave displacement. In other words, the displacement of a wave (u) only appears as a single factor of u everywhere in the equation. No other powers of u, no functions of u such as ln(u). This linearity ensures that if we take any two known solutions of the wave equation, then adding them together produces yet another solution to the wave equation - in this case the resultant wave. However, if the wave equation was nonlinear (i.e. had powers of u other than 1), then this would not be the case. The sum of two existing solutions would NOT be a solution in itself.
Luckily, the universe seems to behave linearly very often, and any linear system can use the principle of superposition to find solutions that are formed by summing other existing solutions. As a result, wave interference becomes an easy-to-understand topic, but this has a much deeper reason than common sense!
The Principle of Superposition actually has a lot more mathematical detail and rigor behind it. Linear systems that follow the Principle of Superposition can be defined in terms of two properties: the linear functions must be additive and homogeneous. In other words, the function of two variables added together must be equal to the sum of the functions of the two individual variables (additivity), and a constant multiplied by the function of a variable must be equal to the function of the constant multiplied by the variable (homogeneity).
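Here's a small sympy sketch of these two properties (my own illustration), applied to the wave operator itself:

```python
import sympy as sp

x, t = sp.symbols('x t', real=True)
c, a = sp.symbols('c a', positive=True)

def wave_op(u):
    # The wave operator u_tt - c^2 * u_xx, which is linear in u
    return sp.diff(u, t, 2) - c**2 * sp.diff(u, x, 2)

# Two arbitrary functions - deliberately NOT solutions, so the check is non-trivial
u1 = sp.sin(x * t)
u2 = x**2 * t

# Additivity: applying the operator to a sum equals the sum of the applications
print(sp.simplify(wave_op(u1 + u2) - (wave_op(u1) + wave_op(u2))))  # 0

# Homogeneity: constants pull straight through the operator
print(sp.simplify(wave_op(a * u1) - a * wave_op(u1)))  # 0
```

Both differences vanish because differentiation is itself linear - which is exactly why the sum of two solutions is always another solution.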
For more information on the Superposition Principle, as well as an idea of other systems where this applies, check out the following link: en.wikipedia.org/wiki/Superposition_principle