First, thank you to Ash for highlighting our open access paper on the potentials and pitfalls of Artificial Intelligence-powered Large Language Models (AI-LLMs) in Health and Physical Education Teacher Education. We appreciate the capacity of the PEPRN Blog to help spread the word about this important topic. We would also like to thank the editors at the Journal of Teaching in Physical Education for supporting this project and publishing it as open access.
The paper was prompted by a related conversation several of us authors had on the Playing with Research in Health and Physical Education podcast (episode link). Our intention for writing it was (is) to provoke broader conversation regarding the emergence of these powerful new technologies, particularly as they relate to our work as teacher educators and researchers. As we tried to highlight in our discussion section on ethical and appropriate use, leveraging AI-LLMs is not a neutral act. Therefore, as responsible professionals, we think we, as a field, need to enhance our ability to articulate a path forward regarding if (that is a big if) and how we might integrate AI-LLMs to support our teaching and research. Since these are novel and potentially complicated decisions, we made a point to emphasize that it is best we work together by engaging in constructive, collective discourse. One way we have tried to push the conversation forward was through a symposium at the AIESEP conference in Chile, where several authors of our JTPE article led a discussion about how these models can help, and potentially hurt, what we do. If interested, readers can view the intro video we made for the symposium about different AI-LLMs that can be used for research and teaching here.
We have also discussed a number of different options that involve bringing our community together to learn and chart a reasonable vision of AI-LLM use for our field. One idea is to establish a task force to develop a Position Statement on Ethical and Appropriate Use of AI. Another is to organize an in-person and/or virtual summit on the topic. In any case, work like this is best done with a wide range of stakeholders involved and would likely benefit from an organizational sponsor like AIESEP, AERA, or a similarly inclusive organization.
Since the publication of our article, several academic journals and publishers have released statements regarding the use of AI and AI-assisted technologies in scientific writing (e.g., The Lancet, Taylor & Francis). These types of author guidelines are important and should certainly be developed for our own discipline-specific outlets. But they are just a first step. As an applied field, our work as PETE/HETE professionals is multidimensional and extends beyond publishing manuscripts. We need to consider the impact of these technologies (for better and/or worse) across the entire PETE/HETE work ecology, which our paper was only able to touch on due to page limitations. For example:
• What about our undergraduate students? What impact could habitual AI-LLM use have on their critical thinking, content knowledge acquisition, and ability to teach effectively?
• How might general interaction with AI-LLMs impact our ability, our students’ ability, and their students’ ability to flourish as human beings?
• What about families or students who choose to avoid interacting with AI for philosophical and/or religious reasons? What empathy and accommodations will be necessary to account for these students who want to interact with assignments organically, without the efficiency, speed, or influence of AIs?
• What about our field? What happens when an AI-driven online curriculum initiates and documents better student outcomes than a human physical education teacher?
• What is the role of the HETE/PETE scholar (expert) in an era where the perception seems to be that chatbots already know everything?
• Why spend time writing a blog or conducting a podcast when an AI-LLM chatbot can do the job just as well, or better?
The last bullet point is, of course, a bit tongue-in-cheek. We are both fans of the PEPRN blog and often assign it as a supplemental resource for our PETE students. But, in the interest of pushing the conversation on AI-LLMs forward, we would like to demonstrate the power of a newer AI-LLM program that many may not have heard of (it is highlighted in the YouTube video from the AIESEP Symposium linked above). The program is called SciSpace, and you can use it for free at https://typeset.io/. Once you upload a PDF into the platform, you can ask the AI bot any question related to the paper.
So, we did that with our paper. Below is the result of around 6 minutes of interaction with the program, combined with some copying and pasting into our document. We did not edit the answers. Curious readers should compare the SciSpace output below to the original PEPRN blog article summary.
Our Prompt: Explain the abstract of this paper in 2 lines:
SciSpace Response:
The abstract of this paper was generated by an AI-driven chatbot called ChatGPT, which used text from the article to produce a 150-word summary. The paper discusses the possibilities and challenges associated with the emergence of AI-powered LLMs in academic workspaces.
Can you summarize the editor’s note?
The availability of ChatGPT, an AI-powered chatbot with human-like responses, has sparked interest in the educational community, prompting the need for guidelines or recommendations for its effective and appropriate use.
Can you summarize the introduction?
• The highlighted text discusses the use of ChatGPT, an AI-driven chatbot, to generate abstracts and its implications for the future of scholarship.
• ChatGPT is an AI-powered large language model developed by OpenAI that can generate human-like text.
• The release of ChatGPT has sparked discussions about its potential impact on education and research.
• The text emphasizes the need for consensus guidelines on the ethical use of AI-powered chatbots in academia.
• The authors of the research note are faculty members in physical education teacher education who are aware of the effects of AI-powered language models in their work.
• Ignoring or being indifferent to the capabilities of AI-powered language models can lead to poor decision-making.
• Faculty members in health and physical education teacher education are encouraged to learn about and leverage AI-powered language models in their academic work.
• It is important to develop skills and competencies to adapt to the evolving technology landscape.
• The call is not to blindly adopt new tools and technologies but to recognize the potential implications and work together to develop ethical guidelines.
• Field- and discipline-specific consensus statements are suggested to guide the ethical and appropriate use of AI-powered language models.
Can you summarize the conclusion?
• Locke's caution about the potential negative consequences of technological advancements applies to AI-powered LLMs in H-PETE.
• There are currently more questions than answers regarding the appropriate and ethical use of AI-powered LLMs in learning, teaching, and research spaces.
• Consideration is needed for how teachers and scholars are trained to approach technology advancements and AI-driven tools like ChatGPT.
• The discourse around AI-driven LLMs should consider the consequences, power, and potential impact on humanity, as well as ensuring integrity, protecting vulnerable populations, and promoting equitable access.
• Scholars in health, physical education, and kinesiology are called upon to address the proliferation of AI-powered LLMs and consider the need for collective consensus or guidelines.
• The voice and input of all stakeholders in HPE/H-PETE are valuable in understanding AI-powered LLMs in teaching, learning, and research spaces.
• Consensus building and further discussions through open forums, conversations, papers, and reports are necessary to shape a clearer understanding of AI-powered LLMs in HPE/H-PETE.
What is the main point of the paper?
The main point of the paper is to discuss the possibilities and challenges associated with the emergence of AI-driven LLMs in academic workspaces, particularly in the field of health and physical education teacher education (H-PETE). The authors suggest that ignoring or being indifferent to the potential impacts of these technologies can lead to "dumb things" and encourage H-PETE faculty members to learn about the capabilities, functions, and potential applications of AI-powered LLM tools. The authors also highlight the importance of maintaining sociocritical perspectives when considering the use of digital technologies and emphasize the need for field-specific consensus statements to guide ethical and appropriate use of AI-powered LLMs.
What are the main arguments of the paper?
• The emergence of AI-driven LLMs can potentially reposition who or what holds a position of authority on a topic and who or what are considered key stakeholders in brokering new/alternative knowledges.
• Instructors could harness this AI function to augment instruction in ways that enhance or update their breadth of knowledge.
• Ignoring, avoiding, or being indifferent to predictions about what technology like AI-powered LLMs can do can also cause academics to do really dumb things.
• It is important to learn about the capabilities, functions, and potential applications of AI-powered LLM tools and to actively develop and adapt skills and competencies to address the ever-evolving technology landscape with appropriate wisdom.
• Scholars should recognize the potential implications AI-powered LLMs bring to their work and consider working together to develop field- and discipline-specific consensus statements to guide ethical and appropriate use.
Summary of the paper in one or two sentences?
The paper discusses the possibilities and challenges associated with the emergence of AI-driven LLMs in academic workspaces, particularly in the field of health and physical education teacher education (H-PETE). The authors suggest that ignoring or being indifferent to the potential impacts of these technologies can lead to "dumb things" and encourage H-PETE faculty members to learn about the capabilities, functions, and potential applications of AI-powered LLM tools.
(Chad and Risto are back writing again now) There are obvious benefits to using these types of programs. One benefit is that they are incredibly efficient. However, the massive pitfall is that we spent 6 minutes copying, pasting, and reading bullet points output by an AI-LLM instead of genuinely interacting with the paper. We didn’t have to read the paper, or even skim it, to get these answers.
So, let’s think of an undergraduate student who is asked to do a project like this (e.g., summarize the main arguments of the paper and provide summaries of specific sections). Using an AI-LLM, a student can now perform the task in 6 minutes instead of the hour it would have taken before. It sure does speed up the assignment, but does the student/scholar learn? Anyone, even with zero scholarly experience in the field of HPE, can use an AI-LLM to summarize an article the way Ash does, but that does not mean they can conceptualize its arguments in the same way, because that depends on previous knowledge. AI-LLMs have a tremendous capacity to help us do our jobs better, but without previous knowledge or an understanding of how this one article sits within the greater field of HPE, it is just an exercise in copying and pasting content without much extra meaning or enrichment. So, we should definitely not stop reading the research! Maybe these programs can simply help us navigate the literature better.
There are many more implications and decisions to be made regarding AI-LLMs. Some are broad and need community action to help chart initial steps forward, which we hope can/will happen in the near future. Other implications are more personal and need deep individual reflection. For example: why bother [ ], when an AI chatbot can do it better? When perhaps we should be asking: why let an AI chatbot [ ], when we can gain satisfaction and enrichment from doing it ourselves?