Hard Light Productions Forums
Off-Topic Discussion => General Discussion => Topic started by: QuantumDelta on September 05, 2010, 06:08:02 pm
-
http://www.physorg.com/news202921592.html
Emphasis on "may", of course.
-
The article isn't specific about it, so here's my question: do said variations follow a Gaussian distribution?
-
Considering we've only recently gained the ability to observe the fine details of the universe, I don't find this too surprising. I think we're going to find out constants themselves aren't really constant.
Of course, IANAP.
-
The article isn't specific about it, so here's my question: do said variations follow a Gaussian distribution?
Eh? Well, if you mean that observed values of alpha would disperse according to a bell curve, that'd be expected, but I didn't see any actual data in the article.
The more interesting thing is that the distribution of the different measured alpha values is not even across the observed universe - for some reason, the values measured from one hemisphere (Earth's northern hemisphere) are different from the ones measured from the southern hemisphere.
Presumably there is some average/mean/median value around which the measurements are centered, but I don't know if that sort of statistics is very relevant in this case.
-
They would be relevant if scientists prove that variations are regular, follow a Gaussian distribution and are therefore symmetric or partially symmetric.
-
I don't think the issue is settled, but I believe the last cosmological data I read suggested the universe was flat, open and infinite.
-
Last I heard, they were talking about that inverted sphere thing.
-
They would be relevant if scientists prove that variations are regular, follow a Gaussian distribution and are therefore symmetric or partially symmetric.
Yes, but the measurements are asymmetric to begin with.
Let's take an example: suppose you analyze the (absolute) brightness of, say, a hundred random stars distributed evenly throughout the sky.
You would end up with a set of measurements with a Gaussian distribution for the brightness of the stars: there would be a certain average brightness, and stars of different brightnesses would also be distributed evenly around the sky.
This measurement of alpha is analogous to finding that the northern hemisphere has dimmer stars and the southern hemisphere has brighter stars.*
This measurement is asymmetric to begin with, and even if there's a certain "average" alpha value around which the measurements are centered on a Gaussian curve, it doesn't do anything to remove the fact that there are spatially asymmetric measurements of alpha waiting for an explanation.
I wouldn't even be so amazed if it weren't for the fact that one half of the observable universe seems to have a different alpha value than the other half. I could understand evenly distributed changes in the fine structure constant - that would still support a generally homogeneous and isotropic universe - but I can't figure out any obvious reason why it would be divided like that.
*Of course, I'm ignoring here that the location of the Milky Way will throw off our example in pretty much exactly the described way, but if you just looked at stars within our spiral arm, within a certain distance so that the varying star distributions in different parts of the Milky Way wouldn't disturb the measurements, you'd get a pretty even distribution of brightnesses for the stars used in the experiment.
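If it helps to see the point numerically, here's a rough toy simulation (entirely invented numbers, nothing from the paper) showing that a dipole-like offset between hemispheres can hide inside an overall bell-shaped spread of measurements:
[code]
import numpy as np

rng = np.random.default_rng(0)
n = 300

# Random sky directions; z > 0 counts as "north", z < 0 as "south".
z = rng.uniform(-1.0, 1.0, n)

# Toy model: a small dipole signal along z plus Gaussian measurement noise.
# Both numbers are made up for illustration.
dipole_amplitude = 1e-5
noise_sigma = 2e-5
delta_alpha = dipole_amplitude * z + rng.normal(0.0, noise_sigma, n)

print("pooled mean :", delta_alpha.mean())         # close to zero, histogram looks Gaussian
print("north mean  :", delta_alpha[z > 0].mean())  # shifted one way
print("south mean  :", delta_alpha[z < 0].mean())  # shifted the other way
[/code]
The pooled values still pile up in a single bell curve, which is the point above: a nice Gaussian spread doesn't rule out a spatially lopsided signal.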
-
Other possibility: either Keck or the VLT is, or was during the time of the observations, slightly out of tune.
-
Other possibility: either Keck or the VLT is, or was during the time of the observations, slightly out of tune.
Systematic error would probably be the more likely culprit for the dualistic division of the alpha values.
-
The idea of non-uniform constants is appealing because it provides a solution to that irritating anthropic principle paradox.
It also may increase the probability that Superman exists in a flat open infinite universe away from the current 0, moving it towards Batman's 1.
Superman will still never be as cool as Batman.
-
[tr0ll] so possibly... there's a region in the universe where the "Difficulty in learning OpenGL/GLSL" constant is non-infinite?? [/tr0ll]
Anyway... I'm a complete noob when it comes to scientific matters compared to Battman or Herra, but why would variance in physical constants imply variance in physical laws? Is there a concrete reason why this "alpha" (which I've never heard of btw, so this is all wild mass guessing) should be constant everywhere when, say, the speed of light isn't (in a non-vacuum)?
-
Other possibility: either Keck or the VLT is, or was during the time of the observations, slightly out of tune.
Systematic error would probably be the more likely culprit for the dualistic division of the alpha values.
Well yeah, but that's until you consider tha--ALL GLORY TO THE HYPNOCAT!
-
[tr0ll] so possibly... there's a region in the universe where the "Difficulty in learning OpenGL/GLSL" constant is non-infinite?? [/tr0ll]
Anyway... I'm a complete noob when it comes to scientific matters compared to Battman or Herra, but why would variance in physical constants imply variance in physical laws? Is there a concrete reason why this "alpha" (which I've never heard of btw, so this is all wild mass guessing) should be constant everywhere when, say, the speed of light isn't (in a non-vacuum)?
The speed of light in a non-vacuum is constant and equal to the speed of light in a vacuum. The speed of light never changes.
When you hear people talking about the speed of light changing in various materials, what they actually mean is that the photons run into various atoms and get absorbed, then re-emitted a while later. The actual speed of the photons is still c; they simply spend some of the overall travel time not existing.
-
Am I the only person who reads the actual journal entry for these things? :p
For The Brave of Heart (http://arxiv.org/PS_cache/arxiv/pdf/1008/1008.3907v1.pdf)
More to discuss if/when I figure out wtf this all means.
Edit: Okay, after reading more, I'm starting to think this is actually quite legit. To begin with, the idea of variation over time/space of the fine structure constant, alpha, is not new, as pointed out previously. Although many observations have been done on earth (the Oklo phenomenon, various laboratory experiments involving radioactive decay and atomic spectra, etc) that resulted in no clear evidence of variation, those tests simply were not sensitive enough. Astrophysical observations of quasars, on the other hand, allow us to look over a *much* greater range of time, and that's where we see the discontinuity of alpha.
Back to this publication: if we assume these results are valid, then this is quite a discovery, because it shows us not only that alpha seems to change over time, but also that it has a preferred *direction* in which it changes. This is not only important for things like String Theory and other physical models of the universe, but it also offers a solution to the anthropic principle problem -- it suggests our "universe" might be part of a much larger system/structure with varying conditions, and so we just occupy one small part of it that is habitable. Just like how we occupy one small habitable planet out of many uninhabitable ones. Hardly surprising.
I also wonder if this feeds back into the "Cosmology with torsion - universe birthed within a black hole?" thing posted a while back. Remember how that theory suggested that, because most black holes rotate, if our universe was birthed in the manner it suggests we should expect to see a "preferred direction". I might be looking too deeply into it, though, I don't know.
Anyways, cool story bro and I wish to see further observations on this in the future. Hopefully with more telescopes and a bigger set of quasars.
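For anyone who wants to poke at the idea, here's a minimal sketch (my own toy version, not the authors' actual pipeline) of the kind of dipole fit involved: model delta-alpha/alpha as a monopole plus a dipole term along some preferred axis and solve it by least squares.
[code]
import numpy as np

def fit_dipole(directions, dalpha):
    """Least-squares fit of dalpha ~ m + B . n_hat (monopole + dipole).

    directions: (N, 3) unit vectors toward each sightline
    dalpha:     (N,)   measured delta-alpha/alpha values
    Returns (monopole, dipole_vector); the dipole's length is the amplitude
    and its direction is the preferred axis.
    """
    design = np.hstack([np.ones((len(dalpha), 1)), directions])  # columns: 1, nx, ny, nz
    coeffs, *_ = np.linalg.lstsq(design, dalpha, rcond=None)
    return coeffs[0], coeffs[1:]

# Toy data with an injected dipole along +x (invented numbers, for illustration only).
rng = np.random.default_rng(1)
n_hat = rng.normal(size=(200, 3))
n_hat /= np.linalg.norm(n_hat, axis=1, keepdims=True)
data = n_hat @ np.array([1e-5, 0.0, 0.0]) + rng.normal(0.0, 2e-5, 200)

m, d = fit_dipole(n_hat, data)
print("recovered monopole:", m)
print("recovered dipole  :", d, "amplitude:", np.linalg.norm(d))
[/code]
With more sightlines (or less noise) the recovered axis converges on the injected one; with too few, it wanders - which is exactly why more telescopes and a bigger quasar sample matter.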
-
Why is it that every LaTeX-created journal article looks equally ass-ugly? :p
Hmm...well this is interesting stuff. Way too early to draw any firm conclusions about these individual results, but interesting regardless.
-
Other possibility: either Keck or the VLT is, or was during the time of the observations, slightly out of tune.
I applaud the thought of questioning/checking the accuracy of the instruments, but first let's note the following things :)
- *both* telescopes gave a nonzero result.
- the observations were discontinuous and made over the course of several months.
- A detailed analysis of errors is included in the publication, as is customary.
If this finding was due to an inaccuracy of the scopes, I'd imagine we'd have noticed said inaccuracy by now from other research done with them.
-
I think the easiest thing to do is try the experiment again with more telescopes.
-
[tr0ll] so possibly... there's a region in the universe where the "Difficulty in learning OpenGL/GLSL" constant is non-infinite?? [/tr0ll]
Anyway... I'm a complete noob when it comes to scientific matters compared to Battman or Herra, but why would variance in physical constants imply variance in physical laws? Is there a concrete reason why this "alpha" (which I've never heard of btw, so this is all wild mass guessing) should be constant everywhere when, say, the speed of light isn't (in a non-vacuum)?
The speed of light in a non-vacuum is constant and equal to the speed of light in a vacuum. The speed of light never changes.
When you hear people talking about the speed of light changing in various materials, what they actually mean is that the photons run into various atoms and get absorbed, then re-emitted a while later. The actual speed of the photons is still c; they simply spend some of the overall travel time not existing.
Yeah, all photons move at c while traveling... a non-vacuum medium, however, affects the group velocity of the EM wave motion (even though the individual quanta always travel at c through the intervening vacuum!)
The group velocity of electromagnetic wave motion depends on the permittivity and the permeability of the medium. Vacuum has definite values for these (the electric constant, a.k.a. the permittivity of vacuum, and the magnetic constant, a.k.a. the permeability of vacuum). In vacuum, the velocity of EM radiation follows from Maxwell's equations as
c = 1 / √(ε0 μ0)
where ε0 is the permittivity of vacuum and μ0 is the permeability of vacuum.
Basically, as far as electromagnetic wave motion is concerned, vacuum is not nothing - it impedes electric and magnetic fields at a certain level. Otherwise, if these values were zero, you can see that the speed of light in vacuum would be infinite and the world would be, eh, quite a different place.
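Just to make that concrete, plugging in standard CODATA-style values (my numbers, nothing from the article) gives the familiar speed of light back:
[code]
import math

eps0 = 8.8541878128e-12  # vacuum permittivity ("electric constant"), F/m
mu0  = 1.25663706212e-6  # vacuum permeability ("magnetic constant"), N/A^2

c = 1.0 / math.sqrt(eps0 * mu0)
print(c)  # ~2.998e8 m/s
[/code]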
Now, the reason why this has any relevance to the topic is that the fine structure constant is tied to many of the so-perceived constants of nature - the vacuum's permeability and permittivity being two of them (almost all the rest appear as well). To be specific, the fine structure constant can be defined in the following ways:
α = e² / (4π ε0 ħ c)
or
α = e² c μ0 / (2 h)
or
α = ke e² / (ħ c)
...and the sharp-eyed of you might notice that the first two notations are a tad circular, since they use both the speed of light and the electric/magnetic constants, and the speed of light itself depends on those constants. The third one is less so. The symbols used in these equations are:
e = elementary charge
ħ = "h-bar", reduced Planck constant (defined as h / 2π ) - h is obviously the Planck constant
c = speed of light in vacuum
ε0 = electric constant (permittivity of vacuum)
μ0 = magnetic constant (permeability of vacuum)
ke = Coulomb constant
This is a bit of a mouthful to use in equations, of course, so most of the time electrostatic cgs units are used, where the Coulomb constant is 1 and dimensionless, and then the fine structure constant can be abbreviated as
α = e² / (ħ c)
...and by now, if you have any working knowledge of physics, you should see that the fine structure constant is basically a glue that ties pretty much all the natural constants together (save the gravitational constant, though that's more a part of Newtonian mechanics - general relativity uses metric tensors to resolve the curvature of space and the resulting gravitational interactions, so it's a bit different from Newton's point-source gravity fields interacting with each other...), and a change in the fine structure constant could result from (or cause) a change in any or all of the constants related to it.
This includes such basic stuff as the charge of the electron and proton, the resulting attractive or repulsive forces, the speed of light, the frequency of photons of a given energy (E = hf), and as a result pretty much all the equations of particle physics and cosmology.
TL;DR - fine structure constant pretty much defines the, well, fine structure of the vacuum.
If vacuum's properties change, everything in it changes as well. Including speed of light and almost all of quantum physics in general.
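And a quick numerical cross-check of the definitions above (again just CODATA-style values plugged in as a sketch, nothing rigorous) - all three give the same α ≈ 1/137:
[code]
import math

e    = 1.602176634e-19     # elementary charge, C
hbar = 1.054571817e-34     # reduced Planck constant, J s
h    = 2 * math.pi * hbar  # Planck constant
c    = 2.99792458e8        # speed of light in vacuum, m/s
eps0 = 8.8541878128e-12    # vacuum permittivity, F/m
mu0  = 1.25663706212e-6    # vacuum permeability, N/A^2
ke   = 1.0 / (4 * math.pi * eps0)  # Coulomb constant

alpha_1 = e**2 / (4 * math.pi * eps0 * hbar * c)
alpha_2 = e**2 * c * mu0 / (2 * h)
alpha_3 = ke * e**2 / (hbar * c)

for a in (alpha_1, alpha_2, alpha_3):
    print(a, "-> 1/alpha ~", 1.0 / a)  # all ~137.036
[/code]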
-
In other words, if we can find a way to manipulate it, we can rewrite the laws of physics to our liking.
well... sort of...
-
? Not sure what you mean.
Excellent post btw, Herra. :yes:
-
Oh boy, another article with an extremely dumbed-down title!
-
Oh boy, another article with an extremely dumbed-down title!
It is a mark of your peculiar brand of intellect that you require the meaningful information in the thread to be presented to you in the title.
-
http://news.stanford.edu/news/2010/august/sun-082310.html
Interesting.
-
Oh boy, another article with an extremely dumbed-down title!
It is a mark of your peculiar brand of intellect that you require the meaningful information in the thread to be presented to you in the title.
Hey, at least I'm not making false assumptions about what the content of the article says based on it this time! Instead, I'm just not going to bother reading it, and I'm going to be cynical about it all the while :p
-
Other possibility: either Keck or the VLT is, or was during the time of the observations, slightly out of tune.
I applaud the thought of questioning/checking the accuracy of the instruments, but first let's note the following things :)
- *both* telescopes gave a nonzero result.
- the observations were discontinuous and made over the course of several months.
- A detailed analysis of errors is included in the publication, as is customary.
If this finding was due to an inaccuracy of the scopes, I'd imagine we'd have noticed said inaccuracy by now from other research done with them.
It's actually pretty likely, because we HAVE noted inaccuracy in both of them. Keck's and the VLT's adaptive optics setup involves deforming the mirror to eliminate distortion and has had a number of unintentional consequences that have reduced its accuracy, focus, and overall usefulness for very deep sky objects. It's a mechanical problem and would require stripping down and rebuilding the scopes. The 36-meter scope will probably fix the problem, but until then...
-
It's actually pretty likely, because we HAVE noted inaccuracy in both of them. Keck's and the VLT's adaptive optics setup involves deforming the mirror to eliminate distortion and has had a number of unintentional consequences that have reduced its accuracy, focus, and overall usefulness for very deep sky objects. It's a mechanical problem and would require stripping down and rebuilding the scopes. The 36-meter scope will probably fix the problem, but until then...
Being a researcher in a related field of optics, I have always been a little bit curious about seeing cancellation with adaptive optics. I know I'm nitpicking about the terms, but it doesn't remove distortion, as that is pretty much constant and doesn't depend much on temperature. It removes the atmospheric seeing effect (statistically, at least). But, with the mirror being supported at a couple of hundred points, it makes me wonder about the cancellation/introduction of astigmatism and perhaps coma - and the exact location of the image. Anyways if you have stuff about finding the error, I'd be delighted to read it.
Makes me wonder if the theoretical people are moving a bit too fast, publishing stuff about a changing alpha instead of checking the telescope first.
-
And about the observed sun and radioactive decay effect: I say more people should test this stuff. If it's confirmed, I don't know what the implications would be, but it would be interesting.
Actually, more people should test that changing alpha stuff as well. Especially with different telescopes.
-
Oh boy, another article with an extremely dumbed-down title!
It is a mark of your peculiar brand of intellect that you require the meaningful information in the thread to be presented to you in the title.
Hey, at least I'm not making false assumptions about what the content of the article says based on it this time! Instead, I'm just not going to bother reading it, and I'm going to be cynical about it all the while :p
If you can't read past the title of an article, then don't post about it. We're having an interesting discussion here.
-
Oh boy, another article with an extremely dumbed-down title!
It is a mark of your peculiar brand of intellect that you require the meaningful information in the thread to be presented to you in the title.
Hey, at least I'm not making false assumptions about what the content of the article says based on it this time! Instead, I'm just not going to bother reading it, and I'm going to be cynical about it all the while :p
If you can't read past the title of an article, then don't post about it. We're having an interesting discussion here.
He probably spends time on Slashdot.
-
Anyways if you have stuff about finding the error, I'd be delighted to read it.
I'll ask around a bit, but it's mainly what I've heard with my semi-docency at Palomar Observatory. As I gather it has to do with the fact that Keck and the VLT, which both use individually actuated and deformed 1-meter mirror sections rather than a single solid mirror, have difficulty pointing the mirror properly at the instruments with the adaptive optics engaged. Apparently there have been nights where Keck simply can't get the mirror to point at all while using the adaptive optics.
Palomar, with a single solid mirror, has it easier in getting the scope mirrors to properly point at the instruments. Why lessons from one are not applicable to the other is something you would know better than I.
-
I'll ask around a bit, but it's mainly what I've heard with my semi-docency at Palomar Observatory. As I gather it has to do with the fact that Keck and the VLT, which both use individually actuated and deformed 1-meter mirror sections rather than a single solid mirror, have difficulty pointing the mirror properly at the instruments with the adaptive optics engaged. Apparently there have been nights where Keck simply can't get the mirror to point at all while using the adaptive optics.
Palomar, with a single solid mirror, has it easier in getting the scope mirrors to properly point at the instruments. Why lessons from one are not applicable to the other is something you would know better than I.
If you have something more, I'd like to read it. I'm starting to have a hunch about why it behaves as it does. The telescope consists of segmented mirror parts that are planar, but small enough that the larger curved surface can be approximated with them accurately enough, right? The problem with segmentation is that each mirror is free to move within the limits imposed by the support structure, and small differences in the surface normal direction might cause large deviations in the spot location after a dozen meters. With a single deforming mirror, the supporting structure cannot cause such large deviations, since the mirror surface is still connected to the neighboring support points.
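To put some back-of-envelope numbers on that last point (the values are purely assumed, not Keck or VLT specs): a reflected ray deviates by twice the tilt error of the mirror surface, so even a small fraction of an arcsecond of segment tilt shifts the spot noticeably over a ~20 m path to the instrument.
[code]
import math

ARCSEC = math.pi / (180 * 3600)  # radians per arcsecond

tilt_error  = 0.05 * ARCSEC  # assumed segment tilt error (illustrative only)
path_length = 20.0           # assumed optical path to the focus, metres

# Reflection doubles the angular error of the surface normal.
spot_shift = 2 * tilt_error * path_length
print(f"spot shift ~ {spot_shift * 1e6:.1f} micrometres")  # ~9.7 um
[/code]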
-
If you have something more, I'd like to read it. I'm starting to have a hunch about why it behaves as it does. The telescope consists of segmented mirror parts that are planar, but small enough that the larger curved surface can be approximated with them accurately enough, right?
No, the parts of the mirrors are parabolic and guided to direct light into a single focus, and they are individually actuated to counter small atmospheric refraction changes in the image during long exposures, as far as I understand the point of adaptive optics.
But I suspect the term "planar" was a bad choice of words rather than a sign of being uninformed about the geometry of the mirror surface itself, yes?
The problem with segmentation is that each mirror is free to move within the limits imposed by the support structure, and small differences in the surface normal direction might cause large deviations in the spot location after a dozen meters. With a single deforming mirror, the supporting structure cannot cause such large deviations, since the mirror surface is still connected to the neighboring support points.
Deformation, and difficulty in synching the mirrors to focus within the required precision. It's a challenge from the perspective of the machinery as well, not just the deforming of the mirrors while they are being actuated continuously.
-
No, the parts of the mirrors are parabolic and guided to direct light into a single focus, and they are individually actuated to counter small atmospheric refraction changes in the image during long exposures, as far as I understand the point of adaptive optics.
But I suspect the term "planar" was a bad choice of words rather than a sign of being uninformed about the geometry of the mirror surface itself, yes?
Actually, no. I said planar, which was indeed due to being uninformed about the telescope. And I have no problem confessing that. Though I had my reasons, the first being that the thought never occurred to me that somebody would try to do adjustments on a piece that has not only curvature, but changing curvature. Why is this a nasty thing? The principal ray path is not well defined in that case (yes, you can put it easily into a computer, but doing it in real life is completely different). So it sounds like asking for trouble, to me at least, because then all the adjustments need to be even more accurate - and along all five or possibly even six axes! If any of you have ever tried to align a system by giving it movement along all six degrees of freedom, you know what I'm talking about. After that I made a mental note to design optics with as few adjustments as possible.
The second reason is that I recall reading about plans to make a telescope with individually adjustable planar segments and assumed this to be it, but I guess I mixed up the telescopes. I guess the other one worked at a different wavelength, for starters.
EDIT^2: After correcting the typos, I still find typos. Bad day for writing English, or the brain is secretly pondering something else.
-
Due to the low ratio between their thickness and their diameter, the VLT primary mirrors will be rather flexible and sensitive to various disturbances, requiring permanent control of their optical shape.
Active optics consists in applying controlled forces to the primary mirror and in moving the secondary mirror in order to cancel out the errors. The scheme was developed by ESO for the 3.5-m New Technology Telescope (NTT) and is now applied to the VLT. The system must essentially compensate for static or slowly varying deformations such as manufacturing errors, thermal effects, low frequency components of wind buffeting, telescope inclination, ... It is also used when changing between Cassegrain and Nasmyth foci.
The mirror blanks are produced by spin-casting. The process (figure 3) starts with the casting of approximately 45 tons of glassy Zerodur into a concave mold. Thereafter the mold is transported onto a rotating platform where it is spun until solidification. When the temperature has decreased to about 800 ºC and the viscosity is such that the blank will retain its meniscus shape, it is brought into an annealing furnace where it is cooled down to room temperature in about 3 months.
So, basically they are cast into a paraboloid shape (a meniscus, to be specific) and then polished, but they require support to stay in an optically correct shape, which is also what makes the active optics possible.
So yeah, there are a lot of variables - not just the active optics commands but also general stability upkeep. (http://en.wikipedia.org/wiki/Meniscus)
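As a side note on the spin-casting: the free surface of a spinning melt settles into the standard paraboloid z(r) = ω²r²/(2g), which corresponds to a focal length f = g/(2ω²). A quick sketch with assumed numbers (roughly an 8 m, f/1.8 blank - not ESO's actual figures) gives a spin rate of a few rpm:
[code]
import math

g = 9.81  # m/s^2
f = 14.8  # assumed focal length, metres (roughly an 8 m mirror at f/1.8)

# Spinning-liquid surface: z(r) = omega^2 r^2 / (2 g)  =>  f = g / (2 omega^2)
omega = math.sqrt(g / (2 * f))       # rad/s
rpm = omega * 60 / (2 * math.pi)
print(f"spin rate ~ {rpm:.1f} rpm")  # about 5-6 revolutions per minute
[/code]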
-
I just realized an additional thing: it is a paraboloid mirror. Had it been spherical, that would give some leeway, as a spherical surface can be considered degenerate for decenter and tilt, but with a paraboloid you cannot do that. The other thing is that off-axis paraboloids tend to have rapidly decreasing imaging performance at larger field angles. While spherical surfaces don't give as good theoretical imaging performance, they tend to do a lot better tolerance-wise.
I'd hate to be the guy who designs the adjustment routines for that thing.
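For a feel of how different the two surfaces actually are, here's a tiny sketch (assumed numbers, not the real VLT prescription) of the sag difference between a paraboloid and a sphere with the same vertex radius of curvature over a ~8.2 m aperture - roughly speaking, that departure is what gives the paraboloid a unique axis and makes decenter and tilt show up in the first place:
[code]
import numpy as np

def sphere_sag(r, R):
    # Sag of a sphere with radius of curvature R (vertex at origin, axis = z)
    return R - np.sqrt(R**2 - r**2)

def parabola_sag(r, R):
    # Sag of a paraboloid with the same vertex radius of curvature
    return r**2 / (2 * R)

R = 28.8                        # assumed vertex radius of curvature, metres
r = np.linspace(0.0, 4.1, 200)  # out to a ~8.2 m aperture

departure = sphere_sag(r, R) - parabola_sag(r, R)  # grows roughly as r^4 / (8 R^3)
print(f"max sphere-paraboloid departure: {departure.max() * 1e3:.1f} mm")  # ~1.5 mm
[/code]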
-
I just realized an additional thing: it is a paraboloid mirror. Had it been spherical, that would give some leeway, as a spherical surface can be considered degenerate for decenter and tilt, but with a paraboloid you cannot do that. The other thing is that off-axis paraboloids tend to have rapidly decreasing imaging performance at larger field angles. While spherical surfaces don't give as good theoretical imaging performance, they tend to do a lot better tolerance-wise.
I'd hate to be the guy who designs the adjustment routines for that thing.
The main mirror in the VLT (well, in all four VLT units) is one piece. The Keck telescopes have segmented primary mirrors. Sadly, I couldn't find as accurate a description of Keck's structure and optics as I could for the VLT. Here's some stuff: http://keckobservatory.org/about/mirror/ (http://keckobservatory.org/about/mirror/)
-
The mirror blanks are produced by spin-casting. The process (figure 3) starts with the casting of approximately 45 tons of glassy Zerodur into a concave mold. Thereafter the mold is transported onto a rotating platform where it is spun until solidification. When the temperature has decreased to about 800 ºC and the viscosity is such that the blank will retain its meniscus shape, it is brought into an annealing furnace where it is cooled down to room temperature in about 3 months.
Meniscus in this context doesn't mean what you think it means (or at least I think so). Take a look here (http://en.wikipedia.org/wiki/Lens_(optics)#Lens_construction) at the picture where they show the effect of the bending factor for a single lens, titled "types of single lens". A meniscus lens is usually considered to be a lens which has the same sign of radius of curvature on both surfaces (and the curvature cannot be infinite). I don't think they have made the bottom of the molding cup flat, but I could be wrong.
But man, talk about a colossal piece of glass. The usual diameter limit is around 2 meters, after which glass starts to lose its shape due to gravity. Astronomical projects always tend to challenge current technology, and this one is a prime example. No wonder Schott is doing it!
-
The main mirror in the VLT (well, in all four VLT units) is one piece. The Keck telescopes have segmented primary mirrors. Sadly, I couldn't find as accurate a description of Keck's structure and optics as I could for the VLT. Here's some stuff: http://keckobservatory.org/about/mirror/
It is surprisingly hard to find accurate descriptions of those things. The VLT seems to have rather good documentation available for everybody, so hats off to Schott. Though I would still be interested in hearing the "that didn't work" parts, which tend to teach you best. In both manufacturing and design. I wonder how they did the tolerance analysis of the design in the VLT case.
Bah, I missed a lot of Painkiller playing due to this thread. Now off to that. Can't think about optics all the time, after all.