class: center, middle, inverse, title-slide .title[ # Generative AI and Statistical Practice ] .subtitle[ ## .scriptsize[Enhancing Quality Control, Education, and Task Reliability with Large Language Models] ] .author[ ### Ying-Ju Tessa Chen, PhD
Scholar
|
@ying-ju
|
ychen4@udayton.edu
Joint work with:
Fadel M. Megahed, PhD
Miami University
Allison Jones-Farmer, PhD
Miami University
Sven Knoth, PhD
Helmut-Schmidt-Universität
Younghwa (Gabe) Lee, PhD
Miami University
Douglas C. Montgomery, PhD
Arizona State University
Brooke Wang, PhD
Miami University
Inez Zwetsloot, PhD
University of Amsterdam
] .date[ ### December 11, 2024 | National Taipei University | Taiwan ] --- # Our Research Team <img src="figs/team.jpg" alt="Our Research Team" width="90%" style="display: block; margin: auto;" /> --- # The Road to Large Language Models <br> <img src="figs/generative_ai_chart.png" alt="From big data to big models, a flow chart documenting how we got to large language models" width="100%" style="display: block; margin: auto;" /> .footnote[ <html> <hr> </html> **Comment:** You have been hearing about **big data** in SPC for over a decade now. In fact, we presented our paper, [Statistical Perspectives on Big Data](https://citeseerx.ist.psu.edu/document?repid=rep1&type=pdf&doi=ab40f392e653b7336cbebf7c4fb95d3988748282), almost exactly 11 years ago at the ISQC Workshop in Sydney. We now have models that can digest questions/prompts and generate answers based on more than 45TB of text. ] --- # Uniqueness of LLMs vs. Earlier AI Models .content-box-gray[ .bold[.red[LLMs:]] .bold[The objective is to generate new content rather than analyze existing data.] ] .font90[ - The generated content is based on .bold[.red[stochastic behavior embedded in generative AI models, such that the same input prompt can result in different content]]. - LLMs with known model sizes can have up to **540 billion parameters** ([PaLM](https://arxiv.org/abs/2204.02311)). Note that state-of-the-art models like *GPT-4o*, *PaLM 2*, and *Claude 3.5 Sonnet* **have not revealed their model sizes**. - With the increase in model size, researchers have observed the **“emergent abilities”** of LLMs, which were **not explicitly encoded in the training**. [Examples include](https://ai.googleblog.com/2022/11/characterizing-emergent-phenomena-in.html): + performing multi-step arithmetic, and + taking college-level exams. - LLMs are **foundation models** (see [Bommasani et al.
2021](https://arxiv.org/abs/2108.07258)), large pre-trained AI systems that can be **repurposed with minimal effort across numerous domains and diverse tasks.** ] --- # Generative AI Hype (2023) <img src="figs/mckinsey_ai.png" width="60%" style="display: block; margin: auto;" /> .footnote[ <html> <hr> </html> **Image Source:** [McKinsey & Company (July 2023). The economic potential of generative AI: The next productivity frontier](https://www.mckinsey.com/capabilities/mckinsey-digital/our-insights/the-economic-potential-of-generative-ai-the-next-productivity-frontier#/) ] --- # Generative AI Hype (2024) .pull-left[ <img src="figs/google_ai.png" width="90%" style="display: block; margin: auto;" /> .center[ .font80[Andrew McAfee (2024). [Generally Faster: The Economic Impact of Generative AI](https://storage.googleapis.com/gweb-uniblog-publish-prod/documents/Generally_Faster_-_The_Economic_Impact_of_Generative_AI.pdf)] ] ] .pull-right[ <img src="figs/gen_ai_coders_sept_2024.jpeg" width="100%" style="display: block; margin: auto;" /> .center[.font80[Cui et al. (2024). [SSRN 4945566](https://papers.ssrn.com/sol3/papers.cfm?abstract_id=4945566)]] ] --- # 🤦🏻‍♀️ But Also Our Experience in October of 2024 <img src="figs/rock_paper_scissors.png" width="58%" style="display: block; margin: auto;" /> .footnote[ <html> <hr> </html> **Source:** [Playing Rock Paper Scissors with ChatGPT 4o on October 23, 2024.](https://chatgpt.com/share/6718eab4-5ebc-800a-a521-c25a136947b9) **Credit:** Fadel M. Megahed ] --- class: inverse, center, middle # On the Use of LLMs, such as ChatGPT, in SQC <br> .pull-left-2[<br>Megahed, F. M., Chen, Y. J., Ferris, J. A., Knoth, S., & Jones-Farmer, L. A. (2024). How generative AI models such as ChatGPT can be (mis)used in SPC practice, education, and research? An exploratory study. *Quality Engineering*, 36(2), 287–315. [Freely available @ [arXiv](https://arxiv.org/pdf/2302.10916.pdf)].]
.pull-right-2[<div><img src="figs/paper_qr_code.png" class="paper-img" width="300px" align="right"></div>] --- # Our Overarching Research Question .content-box-red[ .bold[What can generative LLM-based AI tools do now to augment the roles of SPC practitioners, educators, and researchers?] ] - **Secondary goal:** To motivate the SPC community to be receptive to exploring whether new AI tools can help them be more **efficient**, **productive**, and **innovative**. This is consistent with: + Box and Woodall ([2012](https://www.tandfonline.com/doi/10.1080/08982112.2012.627003)): “we stress the necessity for the quality engineering community to strengthen and promote its role in **innovation**”, and + Hockman and Jensen ([2016](https://www.tandfonline.com/doi/10.1080/08982112.2015.1083107)): “for statisticians to be successful in leading innovation, they will need to strengthen their **skills beyond what they have traditionally needed in the past**, but we believe this will be worth the effort”. - **Scope:** We evaluated the utility of ChatGPT (GPT-3.5 engine) as of its *Jan 30, 2023 version*. --- # Our Study Design <center> <img src="./figs/methods.png" alt="An overview of our study design, where we focused on three applications: code, explanation, and knowledge generation, for each application domain of practice, learning, and research. Red color is used to highlight the questions that will be discussed in the presentation."
width="68%" height="68%" border="0" style="padding:0px; display: block; line-height: 0px; font-size: 0px; border:0px;" /> </center> --- # The Good: Knowledge Generation .bold[Inspired by the TEDxBoston talk titled [what we learned from 5 million books](https://www.ted.com/talks/jean_baptiste_michel_erez_lieberman_aiden_what_we_learned_from_5_million_books?language=en), we asked ChatGPT the following question:] <br> > .bold[.large[“What are open issues in statistical process control research?”]] <br> ### Why did this question seem like a reasonable prompt? .bold[ChatGPT has likely “read” and “can recall” more SPC research papers than most of us.] --- # The Good: Knowledge Generation <img src="figs/research_prompt_08_fig_01.png" alt="ChatGPT highlighted six areas where there are open issues in statistical process control. We will highlight the main themes in the next slide." width="60%" style="display: block; margin: auto;" /> --- # The Good: Knowledge Generation .content-box-red[ .center[.bold[.large[Some Thoughts on the ChatGPT Answer]]] - It captured .bold[reasonable themes, e.g., ] + incorporating .bold[big data and machine learning] techniques, + .bold[online/real-time monitoring] solutions where 100% sampling is employed, + the need to address .bold[non-normality], and + .bold[applications to new domains]. - In our opinion, the .bold[value is in using it as a high-level tool for idea generation/validation]. - Potentially .bold[“stale”] as [Chat(GPT)-3.5 “finished training in early 2022”](https://openai.com/blog/chatgpt/) and is limited to [data up to Sept 2021](https://community.openai.com/t/knowledge-cutoff-date-of-september-2021/66215). + Probably not an issue for future LLM generations (.bold[Why?]) ] --- # The Bad: Precise Definitions <img src="figs/research_prompt_05_fig_01.png" alt="ChatGPT's generated response to our prompt asking it to explain practitioner-to-practitioner variability. Its response is somewhat long and imprecise.
Specifically, ChatGPT presented five factors, which share a common feature: all deal with differences at the method level, i.e., chart type, subgroup design, techniques to calculate the limits, dealing with outliers, and choice of software. While we agree that these factors are important and will drive different results, ChatGPT's answer ignores the context in which practitioner-to-practitioner variability is used in the SPC literature. In fact, practitioner-to-practitioner variability refers to the variation that occurs with a fixed configuration of the five aforementioned factors, i.e., the variation that results from multiple implementations of the same procedure on the same data-generating process." width="44%" style="display: block; margin: auto;" /> --- # The Ugly: ChatGPT's Hallucination .bold[To test whether ChatGPT can detect erroneous requests, we asked:] <br> > .bold[.large[“Can you use the ‘bigfish’ dataset from the qcc library in R to create a control chart?”]] <br> ### Why did this question seem like a reasonable prompt? .bold[In an earlier question (within the same thread), ChatGPT answered a question by using the `qcc` package, i.e., it is .red[familiar with it], and .red[detecting unreasonable requests would be a strong feature for non-expert users].]
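---
# Aside: The Arithmetic Behind the Requested Control Chart

For contrast with the hallucinated `bigfish` answer, the computation the prompt asked for is easy to verify by hand. The sketch below is **ours, not part of the original study**: the paper's examples use R's `qcc`, so this is a Python stand-in on **synthetic data** that only illustrates the textbook X-bar/R limits, using the standard tabulated Shewhart factors (`A2`, `D3`, `D4`) for subgroups of size 5.

```python
# Illustrative sketch (not from the original study): X-bar/R control limits
# for 25 synthetic subgroups of size 5, using standard Shewhart factors.
import random

random.seed(42)
A2, D3, D4 = 0.577, 0.0, 2.114  # tabulated factors for subgroup size n = 5

# synthetic in-control process: 25 subgroups of 5 observations each
subgroups = [[random.gauss(10.0, 0.2) for _ in range(5)] for _ in range(25)]

xbars = [sum(s) / len(s) for s in subgroups]    # subgroup means
ranges = [max(s) - min(s) for s in subgroups]   # subgroup ranges

xbarbar = sum(xbars) / len(xbars)               # grand mean (center line)
rbar = sum(ranges) / len(ranges)                # average range

xbar_lcl, xbar_ucl = xbarbar - A2 * rbar, xbarbar + A2 * rbar
r_lcl, r_ucl = D3 * rbar, D4 * rbar

print(f"X-bar chart: LCL = {xbar_lcl:.3f}, CL = {xbarbar:.3f}, UCL = {xbar_ucl:.3f}")
print(f"R chart:     LCL = {r_lcl:.3f}, CL = {rbar:.3f}, UCL = {r_ucl:.3f}")
```

In `qcc`, the equivalent call on a real grouped dataset would be along the lines of `qcc(grouped_data, type = "xbar")`; the point is simply that a grounded answer can be checked against these textbook formulas, whereas a hallucinated dataset cannot.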
--- # The Ugly: ChatGPT's Hallucination <img src="figs/practice_prompt_03_fig_01.png" alt="The ChatGPT hallucination, answering a question about a non-existent dataset in the qcc library" width="56%" style="display: block; margin: auto;" /> --- # The Ugly: ChatGPT's Hallucination <img src="figs/practice_prompt_03_fig_02.png" alt="ChatGPT making up details about the non-existent bigfish dataset and saying it is popular in the SPC community" width="80%" style="display: block; margin: auto;" /> --- # The Ugly: ChatGPT's Hallucination (GPT-4o) <div style="margin-top: -10px;"> <center> <video style="width: 100%; max-height: 63vh;" controls id="myVideo"> <source src="figs/bigfish.mp4" type="video/mp4"> </video> </center> </div> <script> var video = document.getElementById('myVideo'); video.addEventListener('loadedmetadata', function() { video.currentTime = 11; video.playbackRate = 1.25; }, false); </script> .footnote[ <html> <hr> </html> **Note:** This trial was performed solely for our presentation. The model used here should be a **much-improved model** compared to the GPT-3.5 model examined in the original paper. Yet, the **hallucination** has remained. ] --- class: inverse, center, middle # ChatSQC: Our Grounded App to Address Imprecise SQC Answers and Hallucinations .pull-left-2[<br>Megahed, F. M., Chen, Y. J., Zwetsloot, I., Knoth, S., Montgomery, D.C., & Jones-Farmer, L. A. (2024). Introducing ChatSQC: Enhancing Statistical Quality Control with Augmented AI. *Journal of Quality Technology*, 56(5), 474-497. [Freely available @ [arXiv](https://arxiv.org/pdf/2308.13550)]. <br> Jones-Farmer, L. A., Megahed, F. M., Chen, Y. J., Zwetsloot, I., Knoth, S., Montgomery, D. C., & Capizzi, G. (2024). [Editorial advice for selecting an open-source license for your next paper: Navigating copyrights for publicly facing AI chatbots.](https://www.tandfonline.com/doi/full/10.1080/00224065.2024.2391682) *Journal of Quality Technology*, 56(5), 468-473.
] .pull-right-2[<br> <div><img src="figs/paper2_qr_code.png" class="paper-img" width="300px" align="right"></div> ] --- # The Construction of ChatSQC <img src="figs/ChatSQC_flowchart_new.png" alt="The construction of ChatSQC involved four main phases: (a) a one-time extraction of the reference material, (b) a one-time preprocessing of the extracted material, (c) a continuous (online) chat inference, and (d) the hosting/deployment of the app on a web server." width="80%" style="display: block; margin: auto;" /> --- # A Live Demo of ChatSQC <center> <a href="https://chatsqc.osc.edu/"> <img alt="The interface to our ChatSQC app" src="figs/chatsqc_demo.png" style="width:90%; height:90%;"> </a> </center> .footnote[ <html> <hr> </html> **Note:** We encourage the audience to experiment with **ChatSQC** at <https://chatsqc.osc.edu/>. ] --- class: inverse, center, middle # ChatISA: Our In-House Bot for Students <br> .pull-left-2[<br>Megahed, F. M., Chen, Y. J., Ferris, J.A., Resatar, C., Ross, K., Lee, Y., & Jones-Farmer, L. A. (2024). ChatISA: A Prompt-Engineered Chatbot for Coding, Project Management, Interview and Exam Preparation Activities. Under review. [Freely available @ [arXiv](https://arxiv.org/abs/2407.15010)].] .pull-right-2[<div><img src="figs/paper3_qr_code.png" class="paper-img" width="300px" align="right"></div>] --- # A Live Demo of ChatISA <center> <a href="https://chatisa.fsb.miamioh.edu/"> <img alt="The interface to our ChatISA app" src="figs/chatisa_demo.gif" style="width:80%; height:80%;"> </a> </center> .footnote[ <html> <hr> </html> **Note:** We encourage the audience to experiment with **ChatISA** at <https://chatisa.fsb.miamioh.edu/>. If we have time, we can also go over [this pre-recorded and sped-up demo of the Exam Ally module](https://www.loom.com/share/239950fad0e24ef1875e8d5fb35cbe60). ] --- class: inverse, center, middle # How Can Industrial Statistics Inform LLM Usage and Evaluation? Some Initial Thoughts <br> <br>Megahed, F. M., Chen, Y. 
J., Jones-Farmer, L. A., Knoth, S., Lee, Y., Montgomery, D.C., Wang, B., & Zwetsloot, I. (2024). Work in progress. --- # LLM Usage in Business and Industry <img src="figs/use_cases.svg" width="100%" style="display: block; margin: auto;" /> .footnote[ <html> <hr> </html> **Created By:** Fadel Megahed, based on the text of the article by Ava McCartney (2024). "When Not to Use Generative AI", *Gartner*. The article was published on April 23, 2024, and last accessed on June 14, 2024. It can be accessed at <https://www.gartner.com/en/articles/when-not-to-use-generative-ai>. ] --- # Current Research on LLM-based Classification <img src="figs/adjusted_paper_info.gif" width="70%" style="display: block; margin: auto;" /> .footnote[ <html> <hr> </html> **Note:** The paper by [Eisfeldt et al. (2023)](https://www.nber.org/papers/w31222) is what led to our collaboration with Brooke Wang and started our latest work in this area. ] --- # "Generative AI and Firm Values:" An Overview <img src="figs/andrea_paper.svg" width="100%" style="display: block; margin: auto;" /> .footnote[ <html> <hr> </html> **Note:** Our best attempt to summarize the work of Eisfeldt et al. (2023). "Generative AI and Firm Values". Available from <https://www.nber.org/papers/w31222>. Note that this chart was created by Fadel Megahed and that the authors randomly passed job-task pairings into ChatGPT, repeating this process 3 times to measure the consistency of its classifications. ] --- # "Generative AI and Firm Values:" Pairings .font70[ A random sample drawn from 58 occupation-task pairings, for **biostatisticians**, **statistical assistants**, and **statisticians**.
] .font70[ <table style="margin-left: auto; margin-right: auto;"> <thead> <tr> <th style="text-align:left;"> occupation </th> <th style="text-align:left;"> task_description </th> <th style="text-align:left;"> gpt_label </th> </tr> </thead> <tbody> <tr> <td style="text-align:left;font-weight: bold;"> Biostatisticians </td> <td style="text-align:left;"> Review clinical or other medical research protocols and recommend appropriate statistical analyses. </td> <td style="text-align:left;font-weight: bold;"> <span style=" color: rgba(254, 178, 76, 255) !important;">E2</span> </td> </tr> <tr> <td style="text-align:left;font-weight: bold;"> Biostatisticians </td> <td style="text-align:left;"> Monitor clinical trials or experiments to ensure adherence to established procedures or to verify the quality of data collected. </td> <td style="text-align:left;font-weight: bold;"> <span style=" color: rgba(169, 169, 169, 255) !important;">E0</span> </td> </tr> <tr> <td style="text-align:left;font-weight: bold;"> Biostatisticians </td> <td style="text-align:left;"> Design research studies in collaboration with physicians, life scientists, or other professionals. </td> <td style="text-align:left;font-weight: bold;"> <span style=" color: rgba(169, 169, 169, 255) !important;">E0</span> </td> </tr> <tr> <td style="text-align:left;font-weight: bold;"> Statistical Assistants </td> <td style="text-align:left;"> Code data prior to computer entry, using lists of codes. </td> <td style="text-align:left;font-weight: bold;"> <span style=" color: rgba(195, 20, 45, 255) !important;">E1</span> </td> </tr> <tr> <td style="text-align:left;font-weight: bold;"> Statistical Assistants </td> <td style="text-align:left;"> Enter data into computers for use in analyses or reports. 
</td> <td style="text-align:left;font-weight: bold;"> <span style=" color: rgba(195, 20, 45, 255) !important;">E1</span> </td> </tr> <tr> <td style="text-align:left;font-weight: bold;"> Statistical Assistants </td> <td style="text-align:left;"> Compile statistics from source materials, such as production or sales records, quality-control or test records, time sheets, or survey sheets. </td> <td style="text-align:left;font-weight: bold;"> <span style=" color: rgba(254, 178, 76, 255) !important;">E2</span> </td> </tr> <tr> <td style="text-align:left;font-weight: bold;"> Statisticians </td> <td style="text-align:left;"> Report results of statistical analyses, including information in the form of graphs, charts, and tables. </td> <td style="text-align:left;font-weight: bold;"> <span style=" color: rgba(254, 178, 76, 255) !important;">E2</span> </td> </tr> <tr> <td style="text-align:left;font-weight: bold;"> Statisticians </td> <td style="text-align:left;"> Develop and test experimental designs, sampling techniques, and analytical methods. </td> <td style="text-align:left;font-weight: bold;"> <span style=" color: rgba(169, 169, 169, 255) !important;">E0</span> </td> </tr> <tr> <td style="text-align:left;font-weight: bold;"> Statisticians </td> <td style="text-align:left;"> Evaluate sources of information to determine any limitations, in terms of reliability or usability. </td> <td style="text-align:left;font-weight: bold;"> <span style=" color: rgba(254, 178, 76, 255) !important;">E2</span> </td> </tr> </tbody> </table> ] .footnote[ <html> <hr> </html> **Note:** Using the rubric from [Eisfeldt et al. (2023)](https://www.nber.org/papers/w31222), we utilized GPT3.5-Turbo on March 24, 2024 to classify 19,281 occupation-task pairings per the request of our colleague [Brooke Wang](https://www.jiaweibrookewang.com/). 
] --- # "Generative AI and Firm Values:" Consistency .font80[ > To validate the consistency and replicability of our procedure, we compare the scores assigned across **3 different GPT runs** ... for a **randomly selected subsample of 100 task statements**. > We compare the different sets of scores as follows: First, we construct **3 different classifications for each task** based on the assigned score: - **Current exposure:** score 1 has been assigned. - **Expected exposure:** Either score 1 or 2 has been assigned. - **Broad exposure:** Any score other than 0 has been assigned. > Then, we **compute the agreement** between different scoring runs with regard to which tasks belong in these categories. The comparison between different runs is shown (below). ] <img src="figs/eisfeld2023.png" width="50%" style="display: block; margin: auto;" /> .footnote[ <html> <hr> </html> **Source:** The quotes and the results in the table are from [Eisfeldt et al. (2023)](https://www.nber.org/papers/w31222). ] --- # Alternatively: Percent vs. Expected Agreement .font80[ We made 5,000 GPT3.5-Turbo API calls (1000 occupation-task pairings `\(\times\)` 5 replicates) and obtained: ] <img src="figs/gpt35_dist_label_conc.png" width="80%" style="display: block; margin: auto;" /> .footnote[ <html> <hr> </html> **Note:** This is also an **imperfect** approach since it assumes that all outcomes are equally likely/important. ] --- # Conjectures .font80[ > There is no knowledge without **theory** ... Experience teaches nothing without a **theory** ... Without theory you have nothing to revise, nothing to learn from ... 
**You have no way to use the information that comes to you.** -- [Deming (1993)](https://mitpress.mit.edu/9780262535939/the-new-economics-for-industry-government-education/) ] <br> .font80[ > Interestingly, **mathematics and statistics are perhaps the only disciplines that tend to equate "theory" with "mathematics."** > Biologists, geologists, and scientists in most other disciplines understand that **theory may or may not be mathematical in nature**. > [Madigan and Stuetzle](https://academiccommons.columbia.edu/doi/10.7916/D8ZG73DT/download) ... made this point: "The issues we raise above have nothing to do with the old distinction between applied statistics and theoretical statistics. The traditional viewpoint equates statistical theory with mathematics and hence with intellectual depth and rigor, but this misrepresents the notion of theory. We agree with the viewpoint that David Cox expressed at the 2002 NSF Workshop on the Future of Statistics that **'theory is primarily conceptual,' rather than mathematical**." ] .footnote[ <html> <hr> </html> **Source:** The quote (as well as the quotes and references within) are from: Hoerl, R.W. and Snee, R.D. (2010). [Moving the Statistics Profession Forward to the Next Level](https://www.tandfonline.com/doi/epdf/10.1198/tast.2010.09240). *The American Statistician*, 64(1), 10-14. ] --- background-image: url("figs/stevens_science_paper.png") background-position: right background-size: contain # On the Theory of <br/> Scales of Measurement .pull-left-2[ An initial and reasonable **starting point** for evaluating LLM output in task classification scenarios is to utilize the seminal work of Stanley Stevens. From here, we can think of: - What is the **data type** of the label? - How should **different data types** be evaluated from an **interrater reliability** perspective? ] .footnote[ <html> <hr> </html> **Source:** Stevens, S. S. (1946). [On the theory of scales of measurement](https://www.jstor.org/stable/pdf/1671815.pdf). 
Science, 103(2684), 677-680. ] --- # From Theory to LLM Practice: Initial Thoughts <img src="figs/llm_reliability_flowchart.svg" width="100%" style="display: block; margin: auto;" /> .footnote[ <html> <hr> </html> **Created By:** Fadel Megahed. This flowchart captures some ideas that we are currently investigating in this space. ] --- # Our Future Work in this Area 1. **Overarching research question:** - .bold[How can we rigorously evaluate the reliability of large language models (LLMs) both within a single model (intra-model) and across different models (inter-model)?] 2. **Considerations:** - .bold[Unequal costs associated with different model runs.] - .bold[Predetermining the number of runs, replicates, and other design of experiments (DoE) factors before starting the experiments.] - .bold[Considering different text labeling scenarios.] --- class: inverse, center, middle # Three Final Thoughts --- # 1. Keeping up with AI Developments is Hard!! <img src="figs/timeline_animation.gif" width="74%" style="display: block; margin: auto;" /> --- # 2. Use Cases Overlap with our Discipline!! <img src="figs/use_cases2.svg" width="100%" style="display: block; margin: auto;" /> .footnote[ <html> <hr> </html> **Created By:** Fadel Megahed, based on the text of the article by Ava McCartney (2024). "When Not to Use Generative AI", *Gartner*. The article was published on April 23, 2024, and last accessed on June 14, 2024. It can be accessed at <https://www.gartner.com/en/articles/when-not-to-use-generative-ai>. ] --- # 3. AI and Statistics: Perfect Together!! <img src="figs/ai_stats_perfect.png" width="60%" style="display: block; margin: auto;" /> .footnote[ <html> <hr> </html> **Source:** Redman, T.C., and Hoerl, R.W. (2024). "AI and Statistics: Perfect Together". *MIT Sloan Management Review*, available at <https://sloanreview.mit.edu/article/ai-and-statistics-perfect-together/>. ] --- ## Thank You! .pull-left[ - This presentation was created based on Dr.
Fadel Megahed's presentation at [Statistische Woche](https://statistische-woche.de/en/startseite-en). Click [here](https://fmegahed.github.io/talks/statweek2024/stats_llm.html) to find the original presentation. - Please do not hesitate to contact me (Tessa Chen) at <a href="mailto:ychen4@udayton.edu"><i class="fa fa-paper-plane fa-fw"></i> ychen4@udayton.edu</a> for questions or further discussions. ] .pull-right[ <img src="./figs/Tessa_grey_G.gif" width="60%" style="display: block; margin: auto;" /> ] --- class: center, middle, inverse, title-slide .title[ # Generative AI and Statistical Practice ] .subtitle[ ### Enhancing Quality Control, Education, and Task Reliability with Large Language Models ] <br> .author[ ### Ying-Ju Tessa Chen, PhD <br>[<svg viewBox="0 0 488 512" style="height:1em;position:relative;display:inline-block;top:.1em;fill:white;" xmlns="http://www.w3.org/2000/svg"> <path d="M488 261.8C488 403.3 391.1 504 248 504 110.8 504 0 393.2 0 256S110.8 8 248 8c66.8 0 123 24.5 166.3 64.9l-67.5 64.9C258.5 52.6 94.3 116.6 94.3 256c0 86.5 69.1 156.6 153.7 156.6 98.2 0 135-70.4 140.8-106.9H248v-85.3h236.1c2.3 12.7 3.9 24.9 3.9 41.4z"></path></svg> Scholar](https://scholar.google.com/citations?user=nfXnYKcAAAAJ&hl=en&oi=ao) | [<svg viewBox="0 0 496 512" style="height:1em;position:relative;display:inline-block;top:.1em;fill:white;" xmlns="http://www.w3.org/2000/svg"> <path d="M165.9 397.4c0 2-2.3 3.6-5.2 3.6-3.3.3-5.6-1.3-5.6-3.6 0-2 2.3-3.6 5.2-3.6 3-.3 5.6 1.3 5.6 3.6zm-31.1-4.5c-.7 2 1.3 4.3 4.3 4.9 2.6 1 5.6 0 6.2-2s-1.3-4.3-4.3-5.2c-2.6-.7-5.5.3-6.2 2.3zm44.2-1.7c-2.9.7-4.9 2.6-4.6 4.9.3 2 2.9 3.3 5.9 2.6 2.9-.7 4.9-2.6 4.6-4.6-.3-1.9-3-3.2-5.9-2.9zM244.8 8C106.1 8 0 113.3 0 252c0 110.9 69.8 205.8 169.5 239.2 12.8 2.3 17.3-5.6 17.3-12.1 0-6.2-.3-40.4-.3-61.4 0 0-70 15-84.7-29.8 0 0-11.4-29.1-27.8-36.6 0 0-22.9-15.7 1.6-15.4 0 0 24.9 2 38.6 25.8 21.9 38.6 58.6 27.5 72.9 20.9 2.3-16 8.8-27.1 16-33.7-55.9-6.2-112.3-14.3-112.3-110.5 0-27.5 7.6-41.3
23.6-58.9-2.6-6.5-11.1-33.3 2.6-67.9 20.9-6.5 69 27 69 27 20-5.6 41.5-8.5 62.8-8.5s42.8 2.9 62.8 8.5c0 0 48.1-33.6 69-27 13.7 34.7 5.2 61.4 2.6 67.9 16 17.7 25.8 31.5 25.8 58.9 0 96.5-58.9 104.2-114.8 110.5 9.2 7.9 17 22.9 17 46.4 0 33.7-.3 75.4-.3 83.6 0 6.5 4.6 14.4 17.3 12.1C428.2 457.8 496 362.9 496 252 496 113.3 383.5 8 244.8 8zM97.2 352.9c-1.3 1-1 3.3.7 5.2 1.6 1.6 3.9 2.3 5.2 1 1.3-1 1-3.3-.7-5.2-1.6-1.6-3.9-2.3-5.2-1zm-10.8-8.1c-.7 1.3.3 2.9 2.3 3.9 1.6 1 3.6.7 4.3-.7.7-1.3-.3-2.9-2.3-3.9-2-.6-3.6-.3-4.3.7zm32.4 35.6c-1.6 1.3-1 4.3 1.3 6.2 2.3 2.3 5.2 2.6 6.5 1 1.3-1.3.7-4.3-1.3-6.2-2.2-2.3-5.2-2.6-6.5-1zm-11.4-14.7c-1.6 1-1.6 3.6 0 5.9 1.6 2.3 4.3 3.3 5.6 2.3 1.6-1.3 1.6-3.9 0-6.2-1.4-2.3-4-3.3-5.6-2z"></path></svg> @ying-ju](https://github.com/ying-ju) | [<svg viewBox="0 0 512 512" style="height:1em;position:relative;display:inline-block;top:.1em;fill:white;" xmlns="http://www.w3.org/2000/svg"> <path d="M476 3.2L12.5 270.6c-18.1 10.4-15.8 35.6 2.2 43.2L121 358.4l287.3-253.2c5.5-4.9 13.3 2.6 8.6 8.3L176 407v80.5c0 23.6 28.5 32.9 42.5 15.8L282 426l124.6 52.2c14.2 6 30.4-2.9 33-18.2l72-432C515 7.8 493.3-6.8 476 3.2z"></path></svg> ychen4@udayton.edu](mailto:ychen4@udayton.edu)</br><br><u><b><font color="white">Joint work with:</b></u><br>Fadel M. 
Megahed, PhD [<svg viewBox="0 0 512 512" style="height:1em;position:relative;display:inline-block;top:.1em;fill:white;" xmlns="http://www.w3.org/2000/svg"> <path d="M326.612 185.391c59.747 59.809 58.927 155.698.36 214.59-.11.12-.24.25-.36.37l-67.2 67.2c-59.27 59.27-155.699 59.262-214.96 0-59.27-59.26-59.27-155.7 0-214.96l37.106-37.106c9.84-9.84 26.786-3.3 27.294 10.606.648 17.722 3.826 35.527 9.69 52.721 1.986 5.822.567 12.262-3.783 16.612l-13.087 13.087c-28.026 28.026-28.905 73.66-1.155 101.96 28.024 28.579 74.086 28.749 102.325.51l67.2-67.19c28.191-28.191 28.073-73.757 0-101.83-3.701-3.694-7.429-6.564-10.341-8.569a16.037 16.037 0 0 1-6.947-12.606c-.396-10.567 3.348-21.456 11.698-29.806l21.054-21.055c5.521-5.521 14.182-6.199 20.584-1.731a152.482 152.482 0 0 1 20.522 17.197zM467.547 44.449c-59.261-59.262-155.69-59.27-214.96 0l-67.2 67.2c-.12.12-.25.25-.36.37-58.566 58.892-59.387 154.781.36 214.59a152.454 152.454 0 0 0 20.521 17.196c6.402 4.468 15.064 3.789 20.584-1.731l21.054-21.055c8.35-8.35 12.094-19.239 11.698-29.806a16.037 16.037 0 0 0-6.947-12.606c-2.912-2.005-6.64-4.875-10.341-8.569-28.073-28.073-28.191-73.639 0-101.83l67.2-67.19c28.239-28.239 74.3-28.069 102.325.51 27.75 28.3 26.872 73.934-1.155 101.96l-13.087 13.087c-4.35 4.35-5.769 10.79-3.783 16.612 5.864 17.194 9.042 34.999 9.69 52.721.509 13.906 17.454 20.446 27.294 10.606l37.106-37.106c59.271-59.259 59.271-155.699.001-214.959z"></path></svg> Miami University](https://miamioh.edu/fsb/directory/?up=/directory/megahefm)<br>Allison Jones-Farmer, PhD [<svg viewBox="0 0 512 512" style="height:1em;position:relative;display:inline-block;top:.1em;fill:white;" xmlns="http://www.w3.org/2000/svg"> <path d="M326.612 185.391c59.747 59.809 58.927 155.698.36 214.59-.11.12-.24.25-.36.37l-67.2 67.2c-59.27 59.27-155.699 59.262-214.96 0-59.27-59.26-59.27-155.7 0-214.96l37.106-37.106c9.84-9.84 26.786-3.3 27.294 10.606.648 17.722 3.826 35.527 9.69 52.721 1.986 5.822.567 12.262-3.783 16.612l-13.087 13.087c-28.026 
28.026-28.905 73.66-1.155 101.96 28.024 28.579 74.086 28.749 102.325.51l67.2-67.19c28.191-28.191 28.073-73.757 0-101.83-3.701-3.694-7.429-6.564-10.341-8.569a16.037 16.037 0 0 1-6.947-12.606c-.396-10.567 3.348-21.456 11.698-29.806l21.054-21.055c5.521-5.521 14.182-6.199 20.584-1.731a152.482 152.482 0 0 1 20.522 17.197zM467.547 44.449c-59.261-59.262-155.69-59.27-214.96 0l-67.2 67.2c-.12.12-.25.25-.36.37-58.566 58.892-59.387 154.781.36 214.59a152.454 152.454 0 0 0 20.521 17.196c6.402 4.468 15.064 3.789 20.584-1.731l21.054-21.055c8.35-8.35 12.094-19.239 11.698-29.806a16.037 16.037 0 0 0-6.947-12.606c-2.912-2.005-6.64-4.875-10.341-8.569-28.073-28.073-28.191-73.639 0-101.83l67.2-67.19c28.239-28.239 74.3-28.069 102.325.51 27.75 28.3 26.872 73.934-1.155 101.96l-13.087 13.087c-4.35 4.35-5.769 10.79-3.783 16.612 5.864 17.194 9.042 34.999 9.69 52.721.509 13.906 17.454 20.446 27.294 10.606l37.106-37.106c59.271-59.259 59.271-155.699.001-214.959z"></path></svg> Miami University](https://miamioh.edu/fsb/directory/?up=/directory/farmerl2)<br>Sven Knoth, PhD [<svg viewBox="0 0 512 512" style="height:1em;position:relative;display:inline-block;top:.1em;fill:white;" xmlns="http://www.w3.org/2000/svg"> <path d="M326.612 185.391c59.747 59.809 58.927 155.698.36 214.59-.11.12-.24.25-.36.37l-67.2 67.2c-59.27 59.27-155.699 59.262-214.96 0-59.27-59.26-59.27-155.7 0-214.96l37.106-37.106c9.84-9.84 26.786-3.3 27.294 10.606.648 17.722 3.826 35.527 9.69 52.721 1.986 5.822.567 12.262-3.783 16.612l-13.087 13.087c-28.026 28.026-28.905 73.66-1.155 101.96 28.024 28.579 74.086 28.749 102.325.51l67.2-67.19c28.191-28.191 28.073-73.757 0-101.83-3.701-3.694-7.429-6.564-10.341-8.569a16.037 16.037 0 0 1-6.947-12.606c-.396-10.567 3.348-21.456 11.698-29.806l21.054-21.055c5.521-5.521 14.182-6.199 20.584-1.731a152.482 152.482 0 0 1 20.522 17.197zM467.547 44.449c-59.261-59.262-155.69-59.27-214.96 0l-67.2 67.2c-.12.12-.25.25-.36.37-58.566 58.892-59.387 154.781.36 214.59a152.454 152.454 0 0 0 20.521 17.196c6.402 
4.468 15.064 3.789 20.584-1.731l21.054-21.055c8.35-8.35 12.094-19.239 11.698-29.806a16.037 16.037 0 0 0-6.947-12.606c-2.912-2.005-6.64-4.875-10.341-8.569-28.073-28.073-28.191-73.639 0-101.83l67.2-67.19c28.239-28.239 74.3-28.069 102.325.51 27.75 28.3 26.872 73.934-1.155 101.96l-13.087 13.087c-4.35 4.35-5.769 10.79-3.783 16.612 5.864 17.194 9.042 34.999 9.69 52.721.509 13.906 17.454 20.446 27.294 10.606l37.106-37.106c59.271-59.259 59.271-155.699.001-214.959z"></path></svg> Helmut-Schmidt-Universität](https://www.hsu-hh.de/compstat/en/sven-knoth-2)<br>Younghwa (Gabe) Lee, PhD [<svg viewBox="0 0 512 512" style="height:1em;position:relative;display:inline-block;top:.1em;fill:white;" xmlns="http://www.w3.org/2000/svg"> <path d="M326.612 185.391c59.747 59.809 58.927 155.698.36 214.59-.11.12-.24.25-.36.37l-67.2 67.2c-59.27 59.27-155.699 59.262-214.96 0-59.27-59.26-59.27-155.7 0-214.96l37.106-37.106c9.84-9.84 26.786-3.3 27.294 10.606.648 17.722 3.826 35.527 9.69 52.721 1.986 5.822.567 12.262-3.783 16.612l-13.087 13.087c-28.026 28.026-28.905 73.66-1.155 101.96 28.024 28.579 74.086 28.749 102.325.51l67.2-67.19c28.191-28.191 28.073-73.757 0-101.83-3.701-3.694-7.429-6.564-10.341-8.569a16.037 16.037 0 0 1-6.947-12.606c-.396-10.567 3.348-21.456 11.698-29.806l21.054-21.055c5.521-5.521 14.182-6.199 20.584-1.731a152.482 152.482 0 0 1 20.522 17.197zM467.547 44.449c-59.261-59.262-155.69-59.27-214.96 0l-67.2 67.2c-.12.12-.25.25-.36.37-58.566 58.892-59.387 154.781.36 214.59a152.454 152.454 0 0 0 20.521 17.196c6.402 4.468 15.064 3.789 20.584-1.731l21.054-21.055c8.35-8.35 12.094-19.239 11.698-29.806a16.037 16.037 0 0 0-6.947-12.606c-2.912-2.005-6.64-4.875-10.341-8.569-28.073-28.073-28.191-73.639 0-101.83l67.2-67.19c28.239-28.239 74.3-28.069 102.325.51 27.75 28.3 26.872 73.934-1.155 101.96l-13.087 13.087c-4.35 4.35-5.769 10.79-3.783 16.612 5.864 17.194 9.042 34.999 9.69 52.721.509 13.906 17.454 20.446 27.294 10.606l37.106-37.106c59.271-59.259 59.271-155.699.001-214.959z"></path></svg> 
Miami University](https://miamioh.edu/fsb/directory/?up=/directory/leeyh2)<br>Douglas C. Montgomery, PhD [<svg viewBox="0 0 512 512" style="height:1em;position:relative;display:inline-block;top:.1em;fill:white;" xmlns="http://www.w3.org/2000/svg"> <path d="M326.612 185.391c59.747 59.809 58.927 155.698.36 214.59-.11.12-.24.25-.36.37l-67.2 67.2c-59.27 59.27-155.699 59.262-214.96 0-59.27-59.26-59.27-155.7 0-214.96l37.106-37.106c9.84-9.84 26.786-3.3 27.294 10.606.648 17.722 3.826 35.527 9.69 52.721 1.986 5.822.567 12.262-3.783 16.612l-13.087 13.087c-28.026 28.026-28.905 73.66-1.155 101.96 28.024 28.579 74.086 28.749 102.325.51l67.2-67.19c28.191-28.191 28.073-73.757 0-101.83-3.701-3.694-7.429-6.564-10.341-8.569a16.037 16.037 0 0 1-6.947-12.606c-.396-10.567 3.348-21.456 11.698-29.806l21.054-21.055c5.521-5.521 14.182-6.199 20.584-1.731a152.482 152.482 0 0 1 20.522 17.197zM467.547 44.449c-59.261-59.262-155.69-59.27-214.96 0l-67.2 67.2c-.12.12-.25.25-.36.37-58.566 58.892-59.387 154.781.36 214.59a152.454 152.454 0 0 0 20.521 17.196c6.402 4.468 15.064 3.789 20.584-1.731l21.054-21.055c8.35-8.35 12.094-19.239 11.698-29.806a16.037 16.037 0 0 0-6.947-12.606c-2.912-2.005-6.64-4.875-10.341-8.569-28.073-28.073-28.191-73.639 0-101.83l67.2-67.19c28.239-28.239 74.3-28.069 102.325.51 27.75 28.3 26.872 73.934-1.155 101.96l-13.087 13.087c-4.35 4.35-5.769 10.79-3.783 16.612 5.864 17.194 9.042 34.999 9.69 52.721.509 13.906 17.454 20.446 27.294 10.606l37.106-37.106c59.271-59.259 59.271-155.699.001-214.959z"></path></svg> Arizona State University](https://search.asu.edu/profile/10123)<br>Brooke Wang, PhD [<svg viewBox="0 0 512 512" style="height:1em;position:relative;display:inline-block;top:.1em;fill:white;" xmlns="http://www.w3.org/2000/svg"> <path d="M326.612 185.391c59.747 59.809 58.927 155.698.36 214.59-.11.12-.24.25-.36.37l-67.2 67.2c-59.27 59.27-155.699 59.262-214.96 0-59.27-59.26-59.27-155.7 0-214.96l37.106-37.106c9.84-9.84 26.786-3.3 27.294 10.606.648 17.722 3.826 35.527 9.69 52.721 
1.986 5.822.567 12.262-3.783 16.612l-13.087 13.087c-28.026 28.026-28.905 73.66-1.155 101.96 28.024 28.579 74.086 28.749 102.325.51l67.2-67.19c28.191-28.191 28.073-73.757 0-101.83-3.701-3.694-7.429-6.564-10.341-8.569a16.037 16.037 0 0 1-6.947-12.606c-.396-10.567 3.348-21.456 11.698-29.806l21.054-21.055c5.521-5.521 14.182-6.199 20.584-1.731a152.482 152.482 0 0 1 20.522 17.197zM467.547 44.449c-59.261-59.262-155.69-59.27-214.96 0l-67.2 67.2c-.12.12-.25.25-.36.37-58.566 58.892-59.387 154.781.36 214.59a152.454 152.454 0 0 0 20.521 17.196c6.402 4.468 15.064 3.789 20.584-1.731l21.054-21.055c8.35-8.35 12.094-19.239 11.698-29.806a16.037 16.037 0 0 0-6.947-12.606c-2.912-2.005-6.64-4.875-10.341-8.569-28.073-28.073-28.191-73.639 0-101.83l67.2-67.19c28.239-28.239 74.3-28.069 102.325.51 27.75 28.3 26.872 73.934-1.155 101.96l-13.087 13.087c-4.35 4.35-5.769 10.79-3.783 16.612 5.864 17.194 9.042 34.999 9.69 52.721.509 13.906 17.454 20.446 27.294 10.606l37.106-37.106c59.271-59.259 59.271-155.699.001-214.959z"></path></svg> Miami University](https://miamioh.edu/fsb/directory/?up=/directory/wangj249)<br>Inez Zwetsloot, PhD [<svg viewBox="0 0 512 512" style="height:1em;position:relative;display:inline-block;top:.1em;fill:white;" xmlns="http://www.w3.org/2000/svg"> <path d="M326.612 185.391c59.747 59.809 58.927 155.698.36 214.59-.11.12-.24.25-.36.37l-67.2 67.2c-59.27 59.27-155.699 59.262-214.96 0-59.27-59.26-59.27-155.7 0-214.96l37.106-37.106c9.84-9.84 26.786-3.3 27.294 10.606.648 17.722 3.826 35.527 9.69 52.721 1.986 5.822.567 12.262-3.783 16.612l-13.087 13.087c-28.026 28.026-28.905 73.66-1.155 101.96 28.024 28.579 74.086 28.749 102.325.51l67.2-67.19c28.191-28.191 28.073-73.757 0-101.83-3.701-3.694-7.429-6.564-10.341-8.569a16.037 16.037 0 0 1-6.947-12.606c-.396-10.567 3.348-21.456 11.698-29.806l21.054-21.055c5.521-5.521 14.182-6.199 20.584-1.731a152.482 152.482 0 0 1 20.522 17.197zM467.547 44.449c-59.261-59.262-155.69-59.27-214.96 0l-67.2 67.2c-.12.12-.25.25-.36.37-58.566 58.892-59.387 
154.781.36 214.59a152.454 152.454 0 0 0 20.521 17.196c6.402 4.468 15.064 3.789 20.584-1.731l21.054-21.055c8.35-8.35 12.094-19.239 11.698-29.806a16.037 16.037 0 0 0-6.947-12.606c-2.912-2.005-6.64-4.875-10.341-8.569-28.073-28.073-28.191-73.639 0-101.83l67.2-67.19c28.239-28.239 74.3-28.069 102.325.51 27.75 28.3 26.872 73.934-1.155 101.96l-13.087 13.087c-4.35 4.35-5.769 10.79-3.783 16.612 5.864 17.194 9.042 34.999 9.69 52.721.509 13.906 17.454 20.446 27.294 10.606l37.106-37.106c59.271-59.259 59.271-155.699.001-214.959z"></path></svg> University of Amsterdam](https://www.uva.nl/en/profile/z/w/i.m.zwetsloot/i.m.zwetsloot.html)<br><br/> ] .date[ ### December 11, 2024 | National Taipei University | Taiwan ]