/Normalization

From canSAS




Normalization
* Regarding NR comparison:
'''For normalization, it is important to know the footprint through reflectivity measurements.'''


Footprint correction (in reflectivity mode) is best done before normalization.
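As a rough illustration of the geometry behind this footprint correction (a minimal sketch, not any instrument's actual reduction code; the function name and the mm/degree conventions are my own assumptions), the fraction of the beam intercepted by the sample can be estimated as:

```python
import numpy as np

def footprint_fraction(beam_width_mm, sample_length_mm, alpha_deg):
    """Fraction of the incident beam intercepted by the sample.

    At incident angle alpha, a beam of width w projects to a footprint
    of w / sin(alpha) on the surface; once that exceeds the sample
    length L, only the fraction L * sin(alpha) / w is intercepted."""
    footprint = beam_width_mm / np.sin(np.radians(alpha_deg))
    return min(1.0, sample_length_mm / footprint)

# e.g. a 0.5 mm beam on a 50 mm sample is fully intercepted at 5 deg,
# but only about half intercepted at a grazing angle of 0.3 deg
```

Dividing measured intensities by this factor before normalization also shows why slit reproducibility matters: the correction scales directly with the beam width.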


Si samples with known material, known SLD --> would that help? To have a comparable geometry to work with at all?
100 nm etched, 100 A wide roofs in Si, backfilled with Fe, to calculate the expected intensity and the SLD that you have in the sample.


We do nano-etching with electron-beam welding on the surface. Stripes should be fine, but we need to define the orientation of the measurement, so it would be easier.
JF: Truncated rods can be difficult, because with subtle variations of the angle the scattering pattern changes drastically.
 
SJ: Need to find out what can be done with chemical etching and electron-beam etching. We have 1 cm - 2 cm in all dimensions. If you work with a micro-sample (square mm), the footprint is way larger than whatever you have.
 
The reference sample should have the same dimensions as the real sample.
 
JK: Even if you have the reference samples, how do they help get the correct intensity?
 
PG: You double-check your data reduction and you check your calibration (are the features at the same q-values, are the absolute intensities at the same value?) to know how reproducible they are. It is more for reproducibility.
 
 
PG: '''Normalization to the direct beam: for GISANS we want to underilluminate (otherwise we get lots of background)'''
 
JF: You measure the difference in the film with contrast. If you do not know your footprint, you will not know your incident angle and the wavelength to the extent you want.
 
PG: In all measurements we did with GISANS (large-sample and small-sample slits) we ensured underillumination.
 
PG: FIGARO --> ToF-GISANS was not easy.
 
SJ: More and more certain that we probably need one additional measurement:
- One reference sample
- One additional measurement at low intensity where we confine the beam to be very small, so that the footprint calculation becomes easier, which we can use later to calibrate the measurement where maybe we can overilluminate
(JF: not sure if this will work. In the measurement where you overilluminate, e.g. the resolution is not that good, you measure GISANS over different depths. You do very different experiments. Might be useful but not a simple scaling.)
 
SJ: For the moment, only a simplistic approach. Without additional reflectometry it is very hard to get to quantitative measurements.
 
 
AK: '''The approach of using the monitor will work, with a sample big enough not to have overillumination problems'''.
PG: If it works, having GISANS with absolute intensity will be very valuable. The size of the nanoparticles will define the GISANS intensity.
 
 
PG: '''Discussion about off-specular scattering and cases where it influences the direct beam and the specular beam.'''
AK: You have trouble understanding where to define the specular beam on the detector.
 
 
SJ: In reflectometry we normalize to the intensity of the incident beam. We do a specular reflectivity curve. Everything in plane is correlated, and everything not landing on the specular beam is not collected.
 
PG: There are cases where you have such strong off-specular scattering that it influences the specular intensity. Boris has published a couple of papers in which he proposes how to deal with this problem.
 
JK: To get all this info, you need to simultaneously fit specular reflectivity together with off-specular scattering and GISANS.
 
Simultaneously: the exact same beam and sample.
 
'''Footprint correction (geometric considerations for absolute intensity)'''

Latest revision as of 08:14, 7 April 2026

Short summary from discussions/breakouts:

  • Full quantitative normalization procedure:
    • measuring GISANS together with NR & Off-specular scattering, to view the whole Q-space.
  • If this is not feasible:
    • Same as for the topic "Background": each system has to be classified for how it should be treated - a survey is needed!
  • General aspects (sample independent):
    • Precise I(lambda) normalization of the beamlines has to be performed
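As a sketch of what such an I(lambda) normalization could look like in a reduction script (hypothetical function and variable names; monitor-scaled division of the sample spectrum by a direct-beam spectrum is only one of the approaches discussed in these notes):

```python
import numpy as np

def normalize_tof(sample_counts, direct_counts, monitor_sample, monitor_direct):
    """Per-wavelength-bin normalization for a ToF instrument (sketch):
    scale both runs to their monitor counts, then divide the sample
    spectrum by the direct-beam spectrum I(lambda)."""
    sample = np.asarray(sample_counts, float) / monitor_sample
    direct = np.asarray(direct_counts, float) / monitor_direct
    out = np.full_like(sample, np.nan)   # empty direct-beam bins stay undefined
    ok = direct > 0
    out[ok] = sample[ok] / direct[ok]
    return out
```

The detector-efficiency and dead-time corrections mentioned in the notes are deliberately left out here; they would multiply into the same per-bin factor.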

Notes:

  • for normalization these should be taken into account hand-in-hand: specular reflectivity, measured and simulated via the Parratt formalism, and off-specular/GISANS, measured and simulated via the DWBA!
    • Can NR help as a "reference" to normalize the GISANS measurement correctly - should it always go together?
    • This is not possible for some of the existing beamlines where GISANS can be performed but NR cannot, especially at monochromatic sources!
    • At ToF-GISANS beamlines one could aim at getting the "NR" by measuring GISANS at different incident angles
  • Another problem: we lose information on the background and on the correct normalization factor by the fact that we do not measure the whole real space anyway, as the detector has finite size
  • there are two different problems:
    • a fully quantitative measurement and
    • a reference to "1", where how to normalize would depend on the sample and the physics to be measured (e.g., superconducting systems: reference via the T>Tc state; magnetic systems: reference via the saturated state). BUT: this reference to "another sample state" is not possible for all systems. What then?
  • Regarding the question "can magnetic references help" - this cannot generally be done, as changing the layer system would impact the SLD, the background, the normalization, etc.
  • Does the "correct normalization" only become critical if the background level is so high that changing the background in the simulation would change the simulated SLD? Is that the same in NR and GISANS?
  • What can you get from Q=0 in a GISANS measurement? Can one extract a quantitative solution from a GISANS measurement at all? Should we rather aim at always having proper reference systems that the cross-section can be compared with for getting the physical parameters needed?
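The Parratt recursion mentioned above for simulating specular reflectivity fits in a few lines. This is a minimal sketch (no resolution smearing, absorption, or roughness; the function name and unit conventions are my own assumptions):

```python
import numpy as np

def parratt_reflectivity(qz, sld, thickness):
    """Specular reflectivity of a layer stack via the Parratt recursion.

    qz        : momentum-transfer values (1/Angstrom)
    sld       : scattering length densities, ambient first, substrate
                last (1/Angstrom^2); both outer media are semi-infinite
    thickness : thicknesses of the inner layers (Angstrom)"""
    kz0 = np.asarray(qz, dtype=complex) / 2.0
    # wavevector component in each medium, relative to the ambient
    kz = np.array([np.sqrt(kz0**2 - 4 * np.pi * (s - sld[0])) for s in sld])
    m = len(sld)
    # start at the bottom interface: nothing is reflected back from below
    r = (kz[m - 2] - kz[m - 1]) / (kz[m - 2] + kz[m - 1])
    for i in range(m - 3, -1, -1):
        rf = (kz[i] - kz[i + 1]) / (kz[i] + kz[i + 1])   # Fresnel coefficient
        phase = np.exp(2j * kz[i + 1] * thickness[i])    # propagation in layer i+1
        r = (rf + r * phase) / (1 + rf * r * phase)
    return np.abs(r) ** 2

# e.g. a bare Si substrate (SLD 2.07e-6 1/A^2) shows total reflection
# below the critical edge and a rapid fall-off above it:
# parratt_reflectivity([0.005, 0.1], [0.0, 2.07e-6], [])
```

The DWBA simulation of off-specular/GISANS is far more involved and is left to dedicated packages such as BornAgain.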

Notes group (?):

  • Instrumental effects: one might need to worry about a few things. Normalisation in GI is very challenging. It would be nice for people to say what they have done, but it is not clear what the best practice would be.
  • Regarding data reduction:
    • The approaches are not uniform when one is measuring the direct beam for normalisation monitoring. One has to put in corrections for efficiency etc. In SAS you are not concerned with normalising to the beam intensity but try to evaluate the differential scattering cross-section. This is related to the amount of sample in the beam and puts a constraint on models. If the sample is a sphere in a medium of something else, one has to define the cross-section. In GI one cannot identify the differential cross-section.
    • It is not obvious that in GISANS experiments everyone measures the incident beam intensity in the same way. A reference sample is very difficult and not the same as for SAS experiments. Many of the detectors will be damaged by the direct beam; at the least the response will not be linear. One needs to know the wavelength distribution for the ToF instruments; it may not be the same as for the direct beam. This becomes something one has to define rather carefully. This is true for reflectometry as well.
    • In the reduced file the raw counts are lost. People say the reduction should estimate the statistical uncertainty, but it is not easy. For the uncertainty of the measured intensity one has to keep the footprint: are the slits reproducible? It can be one of the least precise things; 2 um on 20 um is a 10% error. If one has very few counts it is a problem of statistics. One may have zeros in bins, so one rebins to have 10-15 counts per bin… For GISANS there is not much beam. Along with the counts one needs to provide the other sources of uncertainty. Sometimes people throw away a lot of information.
  • Regarding NR comparison:
    • In NR we can often get few counts, and they turn zero after background subtraction; the data points disappear after taking a log. With few counts one can, instead of subtracting the background, fit data + background to the total signal. One can measure the background with the same Poisson statistics. One has to measure with the same statistics and not just blindly subtract.
    • Oversubtraction of the background can drastically change the fitting result in NR. Understanding errors is complicated. The statistically correct way of doing things in terms of making a model is numerically unstable and has a lot of problems; fitting is tricky. Least squares is stable, other things are less so.
    • The ultracontemporary methods, AI? This is a topic for tomorrow.
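The two statistical points above (fitting data + background rather than subtracting, and rebinning until each bin holds 10-15 counts) can be sketched with plain Poisson reasoning. The function names and the bisection solver are my own choices, not a recommended implementation:

```python
import numpy as np

def fit_signal_poisson(counts, bkg):
    """Maximum-likelihood estimate of a constant signal rate s, assuming
    counts[i] ~ Poisson(s + bkg[i]) with known background rates bkg.
    Unlike per-bin subtraction, the estimate can never go negative.
    The score equation sum(counts / (s + bkg)) = nbins is solved by
    bisection (the score is monotonically decreasing in s)."""
    counts = np.asarray(counts, float)
    bkg = np.asarray(bkg, float)
    score = lambda s: np.sum(counts / (s + bkg)) - len(counts)
    if score(0.0) <= 0:          # even s = 0 overshoots: best estimate is 0
        return 0.0
    lo, hi = 0.0, counts.max() + 1.0
    for _ in range(200):
        mid = 0.5 * (lo + hi)
        lo, hi = (mid, hi) if score(mid) > 0 else (lo, mid)
    return 0.5 * (lo + hi)

def rebin_min_counts(counts, min_counts=10):
    """Merge adjacent bins until each merged bin holds at least
    min_counts; a short trailing bin is folded into the previous one."""
    bins, acc = [], 0
    for n in counts:
        acc += n
        if acc >= min_counts:
            bins.append(acc)
            acc = 0
    if acc:
        if bins:
            bins[-1] += acc
        else:
            bins.append(acc)
    return bins
```

For a constant background b the Poisson fit reduces to max(0, mean(counts) - b), which makes clear how naive per-bin subtraction produces the negative, log-breaking points described above.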

Provided training datasets have to be sufficiently similar to the experiment you are doing. To train, you have to give not only the dataset but also teach what it is representing.