<big>'''Notes from canSAS 2024 AI/ML in SAS Topical Presentation/Discussion'''</big><br><br>


__TOC__
* Talk
** AI is pervasive
*** The most impactful AI applications will be outside the facility walls; these mostly demand good, reduced data
** Decision ready data                                                                                                       
*** Bespoke GUI-based reduction is unnecessarily user-unfriendly, AND prevents future data-intensive work
** Accessible documented API
*** Magic cable story
*** We should more regularly support outside decision engines as the 'center of the universe' with the instrument as a subordinate worker   
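The "center of the universe" idea above can be illustrated with a toy, in-process sketch (all names and the API shape are invented for illustration, not any real facility interface): an outside decision engine queries an instrument that acts purely as a subordinate worker behind a small documented API.

```python
# Hypothetical sketch: an external decision engine drives a subordinate
# instrument through a minimal, documented configure/measure API.
import random
from dataclasses import dataclass, field

random.seed(0)  # make the toy noise reproducible

@dataclass
class Instrument:
    """Toy instrument worker: it only configures itself and measures."""
    position: float = 0.0
    log: list = field(default_factory=list)

    def configure(self, position: float) -> None:
        self.position = position
        self.log.append(("configure", position))

    def measure(self) -> float:
        # Fake signal peaked at position 2.0, plus small noise.
        return -((self.position - 2.0) ** 2) + random.gauss(0.0, 0.01)

def decision_engine(instrument: Instrument, candidates: list[float]) -> float:
    """The 'center of the universe': it decides; the instrument just executes."""
    best_pos, best_val = 0.0, float("-inf")
    for p in candidates:
        instrument.configure(p)
        val = instrument.measure()
        if val > best_val:
            best_pos, best_val = p, val
    return best_pos

inst = Instrument()
best = decision_engine(inst, [0.0, 1.0, 2.0, 3.0])
print(best)  # → 2.0 (the toy signal's peak; noise is far too small to flip it)
```

The design point is that the instrument exposes only generic capabilities (configure, measure) and holds no experimental logic; any decision engine that speaks the API can be swapped in.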


== Notes Start here ==
* Discussion
** Andrew Jackson
*** An accessible, documented API is good, but control over the public internet is not going to happen. All instruments should be able to be ‘plugged into’
** Brian Pauw
*** Has a solution for rolling NeXus/NXcanSAS files into SciCat
** Jan Ilavsky
*** Instrument control from the open internet is not going to happen; the decision is in the wrong hands
*** Why not just use EPICS?
**** A: We’ve done that.
*** People hate Tiled/Bluesky
** Unknown person
*** How do you make sure the AI doesn’t break your instrument or put the instrument in a bad state?
**** A: This is a problem with the instrument. If the instrument can be put in that state, it’s the instrument’s problem
** Unknown Person
*** How to preserve privacy of data for use in ML/AI applications?
**** A: A good and important question; no easy answers
** Adrian Rennie
*** Comment about “center of the universe”: it should be broader. All instruments should be switchable between these two views
*** How to get the users integrated into the decision making?
**** A: Human-machine interfaces are important. Code and efforts exist (e.g. Tsuchinoko)
** Brian Pauw
*** We should support and push for instruments that aren’t the center of the universe
** Tanny Chavez
*** Uncertainty quantification?
**** A: Yes, very important
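The answer on bad instrument states in the discussion above — that a breakable instrument is the instrument's own problem — amounts to enforcing the safe operating envelope server-side rather than trusting the AI client. A minimal sketch (class and limit values invented for illustration):

```python
# Hypothetical sketch: the instrument validates every command against its own
# hard limits, so an errant decision engine cannot put it in a bad state.
class MotorOutOfRange(Exception):
    pass

class SafeMotor:
    """Motor that rejects any move outside its hard limits."""
    def __init__(self, low: float, high: float):
        self.low, self.high = low, high
        self.position = 0.0

    def move_to(self, target: float) -> None:
        # Validate server-side; never rely on the client to stay in bounds.
        if not (self.low <= target <= self.high):
            raise MotorOutOfRange(f"{target} outside [{self.low}, {self.high}]")
        self.position = target

motor = SafeMotor(low=-10.0, high=10.0)
motor.move_to(5.0)            # in-range request: accepted
try:
    motor.move_to(500.0)      # errant AI request: rejected, state unchanged
except MotorOutOfRange:
    pass
print(motor.position)         # → 5.0
```

Under this pattern the AI layer needs no special safety knowledge; any command that would damage the instrument simply fails at the instrument's API boundary.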

Latest revision as of 17:51, 4 December 2024
