/Normalization: Difference between revisions

From canSAS

Latest revision as of 08:14, 7 April 2026

Short summary from discussions/breakouts:

  • Full quantitative normalization procedure:
    • measuring GISANS together with NR & Off-specular scattering, to view the whole Q-space.
  • If this is not feasible:
    • Same as for the topic "Background": each system has to be classified according to how it should be treated - a survey is needed!
  • General aspects (sample independent):
    • Precise I(lambda) normalization of the beamlines has to be performed
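As a rough illustration of such an I(lambda) normalization, here is a minimal sketch: measured counts per wavelength bin are divided by a monitor spectrum, with Poisson errors propagated through the ratio. All names and numbers are hypothetical, not from any particular beamline.

```python
# Hypothetical sketch of I(lambda) normalization for a ToF beamline:
# detector counts per wavelength bin are divided by the incident
# spectrum (e.g. from a monitor), assuming independent Poisson counts.
import numpy as np

def normalize_spectrum(counts, monitor):
    """Divide detector counts by monitor counts per wavelength bin.

    counts, monitor : arrays of raw (Poisson) counts per lambda bin.
    Returns the normalized intensity and its propagated 1-sigma error.
    """
    counts = np.asarray(counts, dtype=float)
    monitor = np.asarray(monitor, dtype=float)
    intensity = counts / monitor
    # Relative errors add in quadrature for a ratio of Poisson counts.
    rel_err = np.sqrt(1.0 / counts + 1.0 / monitor)
    return intensity, intensity * rel_err

counts = np.array([400.0, 900.0, 1600.0])   # illustrative detector counts
monitor = np.array([1e4, 1e4, 1e4])         # illustrative monitor counts
inten, err = normalize_spectrum(counts, monitor)
```

In practice the monitor spectrum itself needs efficiency corrections, so this ratio is only the first step of a real reduction.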

Notes:

  • for normalization, specular reflectivity (measured and simulated via the Parratt formalism) and off-specular/GISANS (measured and simulated via the DWBA) should be treated hand-in-hand!
    • Can NR help as a "reference" to normalize the GISANS measurement correctly - should it always go together?
    • This is not possible for some of the existing beamlines where GISANS can be performed but NR cannot, especially at monochromatic sources!
    • At ToF-GISANS beamlines one could aim at obtaining the "NR" by measuring GISANS at different incident angles
  • Another problem: we lose information on the background and on the correct normalization factor by the fact that we do not measure the whole real space anyhow, as the detector has finite size
  • there are two different problems:
    • a fully quantitative measurement and
    • a reference to "1", where how to normalize would depend on the sample and the physics to be measured (e.g., superconducting systems: reference via the T>Tc state; magnetic systems: reference via the saturated state). BUT: this reference to "another sample state" is not possible for all systems. What then?
  • Regarding the question "can magnetic references help": this cannot generally be done, as changing the layer system would impact the SLD, the background, the normalization, etc.
  • Does the "correct normalization" only become critical if the background level is so high that changing the background in the simulation would change the simulated SLD? Is that the same in NR and GISANS?
  • What can you get from Q=0 in a GISANS measurement? Can one extract a quantitative solution from a GISANS measurement at all? Should we rather aim at always having proper reference systems that the cross section can be compared with, to obtain the physical parameters needed?
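The "reference to 1" idea above can be sketched as a simple rescaling: the dataset is divided by the mean level of the reference-state measurement (e.g. the T>Tc curve or the saturated state) so that the reference averages to unity. This is only an illustrative toy with made-up numbers, not a prescription; a real normalization would have to handle wavelength, footprint, and efficiency dependence.

```python
# Toy sketch of normalizing against a reference sample state so that
# the reference level becomes "1" (e.g. T > Tc for a superconductor,
# or the saturated state for a magnet). Purely illustrative.
import numpy as np

def scale_to_reference(data, reference):
    """Scale data by the inverse mean of the reference measurement."""
    scale = 1.0 / np.mean(reference)
    return np.asarray(data, dtype=float) * scale, scale

data = np.array([220.0, 180.0, 210.0])       # sample-state counts (made up)
reference = np.array([200.0, 200.0, 200.0])  # reference-state counts (made up)
scaled, scale = scale_to_reference(data, reference)
```

As the notes point out, this only works when a comparable "other sample state" exists at all.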

Notes group (?):

  • Regarding instrumental effects, one might need to worry about a few things. Normalisation in GI is very challenging. It would be nice for people to state what they have done, but it is not clear what the best practice would be.
  • Regarding data reduction:
    • The approaches are not uniform when measuring the direct beam for normalisation monitoring; one has to put in corrections for detector efficiency etc. In SAS you are not just normalising to the beam intensity but trying to evaluate the differential scattering cross-section. This is related to the amount of sample in the beam and puts a constraint on models. If the sample is a sphere in a medium of something else, one has to define the cross-section. In GI one cannot identify the differential cross-section.
    • It is not obvious that in GISANS experiments everyone measures the incident beam intensity in the same way. A reference sample is very difficult and not the same as for SAS experiments. Many of the detectors will be damaged by the direct beam, or at least will not respond linearly. One needs to know the wavelength distribution for ToF instruments; the wavelength distribution at the sample may not be the same as for the direct beam. This becomes something one has to define rather carefully. This is true for reflectometry as well.
    • In the reduced file the raw counts are lost. People say the reduction should estimate the statistical uncertainty, but it is not easy. For the uncertainty of the measured intensity one has to keep track of the footprint: are the slits reproducible? This can be one of the least precise things; 2 um on 20 um is a 10% error. If one has very few counts it becomes a problem of statistics: one may have zeros in bins, so one rebins to have 10-15 counts per bin… For GISANS there is not much beam. Along with the counts one needs to provide the other sources of uncertainty. Sometimes people throw away a lot of information.
  • Regarding NR comparison:
    • In NR we often get few counts, which turn zero after background subtraction, and the datapoints disappear after taking a log. With few counts one can, instead of subtracting the background, fit data + background to the total signal. One can measure the background with the same Poisson statistics. One has to measure with comparable statistics and not just blindly subtract.
    • Over-subtraction of background can drastically change the fitting result in NR. Understanding errors is complicated. The statistically correct way of doing things in terms of making a model is numerically unstable and has a lot of problems; fitting is tricky. Least squares is stable, other approaches less so.
    • The ultracontemporary methods, AI? This is a topic for tomorrow.
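The rebinning point above ("rebin to have 10-15 counts in bins") can be sketched as a greedy merge of adjacent bins until each merged bin reaches a minimum count, so that no bin is empty and sqrt(N) Poisson errors stay meaningful. The function name and threshold are illustrative only; real reduction software will differ.

```python
# Sketch of rebinning low-count data so each bin has at least a
# minimum number of counts, keeping Poisson statistics usable.
import numpy as np

def rebin_min_counts(counts, min_counts=10):
    """Greedily merge adjacent bins so each merged bin has >= min_counts.

    Returns the merged counts and, for each merged bin, the (start, stop)
    slice of original bins it covers. The last bin may fall below the
    threshold if the tail of the data runs out.
    """
    merged, spans = [], []
    acc, start = 0, 0
    for i, c in enumerate(counts):
        acc += c
        if acc >= min_counts:
            merged.append(acc)
            spans.append((start, i + 1))
            acc, start = 0, i + 1
    if acc > 0:  # leftover tail, possibly below the threshold
        merged.append(acc)
        spans.append((start, len(counts)))
    return np.array(merged), spans

counts = [0, 3, 8, 1, 12, 0, 0, 5, 6]  # note the zero-count bins
merged, spans = rebin_min_counts(counts, min_counts=10)
```

The merged counts then carry sqrt(N) uncertainties, while the spans preserve which original bins (and hence which Q range) each point covers.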
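The "fit data + background instead of subtracting" suggestion can be illustrated with a Poisson likelihood, which stays well defined on zero-count bins where subtract-then-log fails. The flat model, the numbers, and the grid search here are deliberately trivial and made up for the example; a real analysis would fit a reflectivity model.

```python
# Sketch of fitting signal + background jointly with a Poisson
# negative log-likelihood instead of subtracting the background.
import numpy as np

def poisson_nll(s, observed, background):
    """Poisson NLL (constant terms dropped) for the model mu = s + background."""
    mu = s + background
    return float(np.sum(mu - observed * np.log(mu)))

observed = np.array([3.0, 0.0, 2.0, 1.0, 4.0])  # note the zero-count bin
background = 1.0  # assumed known flat background level per bin

# Trivial grid search over the flat signal level s.
grid = np.linspace(0.01, 3.0, 300)
best_s = grid[int(np.argmin([poisson_nll(s, observed, background) for s in grid]))]
# For this flat model the ML estimate is mean(observed) - background = 1.0
```

The zero-count bin contributes to the likelihood instead of being dropped, which is exactly what is lost when one subtracts the background and takes a log.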

Training datasets have to be sufficiently similar to the experiment you are doing. To train, you have to provide not just the dataset but also teach what it represents.