The Story of FLUF
Author Dr. Jennifer Parker explains the original FLUF Test.
The FLUF(F) Test is a framework for critically evaluating content generated by artificial intelligence. It encourages the user to look critically at the format, language, usability, and fanfare of AI-generated results, which can be flowery, repetitive, or inaccurate. Using AI tools brings with it a responsibility to be educated about their power, usefulness, and shortcomings. In Fall 2025, an additional domain, function, was added to account for the context and expertise of the reviewer.

Traditionally, online search results have been critiqued with frameworks like SIFT (Caulfield, 2019), CARRDSS (Valenza, 2004), CRAAP (Blakeslee, 2004), and 5 Key Questions (Thoman & Jolls, 2003). These models, however, do not account for AI prompt generation and re-prompting. Prior to the FLUF Test (Parker, 2023), little guidance existed for critiquing AI generative results or improving prompts. I developed the FLUF Test in 2023, drawing on many years as a teacher of information and digital media literacy skills across PK20 environments.

With the FLUF(F) Test, you look specifically at format, language, usability, and fanfare, and the indicators for each. You must also consider the degree of expertise the reviewer has about the topic and their ability to judge the GenAI output. The goal is a result with zero FLUF: zero infractions in the generated results.

The FLUF Test uses a simple rubric of plus (+) or minus (-), which translate to a one and a zero. Each issue or infraction found in an AI generative result is assessed a "plus" and receives a score of one. The infractions are tallied to get the FLUF score. Whenever the score is above zero, the user is encouraged to re-prompt, regenerate, and repeat until the AI generative result scores a zero. The template guides your journey as you use the FLUF indicators to write prompts, critically evaluate results, and repeat the process until you reach zero FLUF. Explore the framework (pages 1-12) or check out the sample scenario of the FLUF Test in Action (pages 13-21), and watch the accompanying video to see how it works.
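To make the plus/minus arithmetic concrete, here is a minimal Python sketch of the tally described above. The five domain names come from the framework itself; the function name, the dictionary-based checklist, and the pass/re-prompt check are illustrative assumptions, not part of the published template.

```python
# A minimal sketch of the FLUF(F) tally described above. The five
# domain names come from the framework; the function name and the
# dict-based checklist are illustrative assumptions.

DOMAINS = ("format", "language", "usability", "fanfare", "function")

def fluf_score(infractions: dict) -> int:
    """Sum infractions across the FLUF domains.

    Each issue found in a GenAI result is a "plus" worth one point;
    a clean domain is a "minus" worth zero. A total of 0 means
    zero FLUF, and the result passes.
    """
    return sum(infractions.get(domain, 0) for domain in DOMAINS)

# Example: two flowery passages (fanfare) and one inaccurate claim
# (tallied under usability here, purely for illustration) score 3,
# so the user would re-prompt, regenerate, and re-score until zero.
checklist = {"fanfare": 2, "usability": 1}
score = fluf_score(checklist)
print(score)  # -> 3
print("zero FLUF" if score == 0 else "re-prompt, regenerate, repeat")
```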
Underpinnings: AI & Information Literacy
The FLUF Test is presented to encourage users to expand their AI and information literacy and to extend the human component of their GenAI interactions, from writing better prompts to critically evaluating results. For an overview of the rationale behind the FLUF Test framework and a detailed walk-through of the FLUF experience, view the presentation.
Explore the slide deck to better understand:
- GenAI Models
- GenAI Output Types (LLM, Images, Multimedia, etc.)
- Using FLUF(F) Test to create better Input (Prompts)
- FLUF(F) Testing GenAI Output (Critical Evaluation)
- Using the Checklist with FLUF(F) Test infractions
- Scoring the FLUF(F) Test rubric
Current Research:
FLUF Test in Action: Examples of the FLUF Test being used in PK20 courses
Alexander, K., Felids, C., Egan, C., & Parker, J. (2025, Fall). Application of a critical evaluation framework (the FLUF Test) for AI-generated outputs in a pharmacy drug information assignment: A pilot study. Journal of Applied Instructional Design.
Parker, J., & Hicks, T. (2025, April 24). Enhancing confidence in using the ISTE Educator Citizen Standard with artificial intelligence: The impact of the FLUF Test among K12 teachers in an ISTE-recognized master's-level learning, design, and technology program [Presentation of research]. Research in Teaching and Learning Conference, University of Florida. [Alternative title: Implementation of the FLUF framework in a master's of learning, design, and technology program for K-12 teachers to assess their experiences with GenAI]
Here are some of the foundational pieces for this work:
The 80/20 Rule (Pareto Principle, 1896)
The balance between human insight and technological capability
80% research and regeneration of online sources
20% human critique, creativity, and culmination to create a final output
Information Literacy: Frameworks for Critical Evaluation of Online Resources
CRAAP (Blakeslee, 2004) – currency, relevance, authority, accuracy, purpose
CARRDSS (Valenza, 2004) – credibility, accuracy, reliability, relevance, date, sources, scope
SIFT (Caulfield, 2019) – stop, investigate the source, find better coverage, trace claims
5 Key Questions (Thoman & Jolls, 2003) – creator, techniques, perceptions, bias, purpose
ISTE Standards for Educators (2017): Citizen Standard
When considering AI policies, mentoring students in ethical and appropriate use of digital resources falls under the guidelines of digital citizenship. According to the ISTE Educator Standard for “Citizen”, “Educators inspire students to positively contribute to and responsibly participate in the digital world.” (International Society for Technology in Education [ISTE], 2017).
2.2a. Create experiences for learners to make positive, socially responsible contributions and exhibit empathetic behavior online that build relationships and community.
Indicator 2.2a provides guidance on how students should interact and behave in socially responsible ways in their use of AI.
2.3b. Establish a learning culture that promotes curiosity and critical examination of online resources, and fosters digital literacy and media fluency.
In 2.3b, we see the essence of critical evaluation. Traditionally, information literacy skills included critically evaluating online sources using protocols like SIFT (Caulfield, 2019); CARRDSS (Valenza, 2004); CRAAP (Blakeslee, 2004); and 5 Key Questions (Thoman & Jolls, 2003). We extend this to include FLUF (Parker, 2023) for critical evaluation of AI.
2.3c. Mentor students in safe, legal and ethical practices with digital tools and the protection of intellectual rights and property.
Indicator 2.3c directs educators to mentor students in ethical practices when using digital tools like AI, and reminds us to cite our use of AI and to adhere to copyright, intellectual property, and fair use guidelines.
2.3d. Model and promote management of personal data and digital identity, and protect student data privacy.
Finally, 2.3d emphasizes data privacy, digital identity, and personal data. This indicator causes users to pause before uploading research, sensitive information, or personal data into an AI generative tool.
Critical Evaluation of AI: FLUF Test (Parker, 2023)
The FLUF Test is presented to encourage users to expand their media and information literacy and to extend their critical evaluation to all information, no matter how it is obtained. For more information about the FLUF Test, templates, or a consultation, contact me.
The FLUF Test Prompt Template and Critical Evaluation Rubric © 2023 by Dr. Jennifer Parker is licensed under CC BY-NC-SA 4.0