Please note:
The guidance document acknowledges that we do not have all the answers; it provides links to research and articles on the topic and will be published as a webpage that the DTLC updates routinely.
Term | Definition |
---|---|
Algorithm | the set of logical rules used to organize and act on a body of data to solve a problem or accomplish a goal, usually carried out by a machine. An algorithm is typically modeled, trained on a body of data, and then adjusted as the results are examined. Because algorithms are generally processed by computers and follow logical instructions, people often think of them as neutral or value-free, but the decisions humans make as they design and tweak an algorithm, and the data on which it is trained, can introduce human biases that are compounded at scale. Humans who interact with an algorithm may also find ways to influence its outcomes, as when a marketer pushes a website up in search results through search engine optimization (SEO). |
Algorithmic justice | the application of principles of social justice and applied ethics to the design, deployment, regulation, and ongoing use of algorithmic systems so that the potential for harm is reduced. Algorithmic justice promotes awareness and sensitivity among coders and the general public about how data collection practices, machine learning, AI, and algorithms may encode and exacerbate inequality and discrimination. |
Algorithmic literacy | a subset of information literacy, algorithmic literacy is a critical awareness of what algorithms are, how they interact with human behavioral data in information systems, and an understanding of the social and ethical issues related to their use. |
Artificial intelligence (AI) | a branch of computer science that develops ways for computers to simulate human-like intelligent behavior, enabling them to interpret and absorb new information, recognize patterns, and improve their problem-solving. Examples include training robots, speech recognition, facial recognition, and identifying objects such as traffic signs, trees, and human beings, as required for self-driving cars. AI relies on machine learning capabilities and training data. Humans are involved in creating or collecting sets of training data (e.g., employing low-wage workers abroad to identify objects on computer screens to provide data for autonomous vehicle navigation), and bias may be built into machine learning (e.g., by using criminal justice data sets for risk assessment in predictive policing). Machines can be trained to learn from experience, but common sense and recognizing context remain difficult, limiting the ability of computer programs to perform tasks such as distinguishing hate speech from colloquial humor or sarcasm. |
Artificial Intelligence as a Service (AIaaS) | Cloud-based AI services providing higher education institutions with access to AI tools, algorithms, and infrastructure, facilitating the development of AI-driven applications and research projects without significant upfront investments. |
Artificial Intelligence Augmentation (AI Augmentation) | The integration of AI technologies to enhance human capabilities in higher education, empowering educators and researchers with AI-driven tools for personalized learning, data analysis, and administrative decision-making. |
Artificial Intelligence Bias Mitigation (AI Bias Mitigation) | Strategies and policies for identifying, mitigating, and preventing biases in AI systems, critical in higher education for ensuring fairness, equity, and diversity in student assessment, admissions, and educational opportunities. |
Artificial Intelligence Chipsets (AI Chipsets) | Specialized hardware accelerating AI computations, utilized in higher education for research in AI algorithms, training large-scale models, and deploying AI applications with improved performance and energy efficiency. |
Artificial Intelligence Ethics (AI Ethics) | The development and deployment of AI systems in alignment with ethical principles and societal values, crucial in higher education for ensuring fairness, equity, and accountability in student assessment, admissions, and decision-making processes. |
Artificial Intelligence Explainability (AI Explainability) | Techniques ensuring transparency and interpretability of AI models, vital in higher education for explaining grading decisions, student feedback, and adaptive learning recommendations to students, instructors, and stakeholders. |
Artificial Intelligence Governance (AI Governance) | Policies and regulations governing the development, deployment, and use of AI technologies in higher education, ensuring ethical and responsible AI practices, data security, and compliance with legal requirements. |
Artificial Intelligence Safety (AI Safety) | Concerns and measures addressing potential risks and harms associated with AI technologies, guiding higher education institutions in the responsible development and deployment of AI systems to ensure student and staff well-being, data security, and regulatory compliance. |
Attention economy | because attention is a finite resource, companies (both platforms and the people who use platforms to sell, entertain, or persuade) compete to capture and hold it. This rewards clickbait and shapes the design of algorithms and platforms to maximize time spent online. |
Bias in AI | Systematic favoritism or prejudice in AI systems, posing challenges in higher education such as biased admissions algorithms and unfair grading systems, necessitating policies for bias detection, mitigation, and transparency. |
Big Data | a set of technological capabilities developed in recent years that, used in combination, allow large volumes of fine-grained and exhaustive data drawn from multiple sources to be gathered, combined, and analyzed continuously. |
Computer Vision | An AI discipline enabling computers to interpret and analyze visual information, utilized in higher education for tasks such as facial recognition for campus security, content accessibility, and augmented reality applications. |
Data exhaust | information generated incidentally as people use computers, carry cell phones, or have their behavior captured through surveillance; it becomes valuable when acquired, combined, and analyzed in great detail at high velocity. |
Deep Learning | A branch of machine learning involving neural networks with multiple layers, used in higher education for tasks such as personalized learning, predictive analytics, and natural language processing. |
Edge AI | The deployment of AI algorithms on edge devices, enabling real-time processing and inference in higher education applications such as IoT-based campus management, personalized learning tools, and mobile educational apps. |
Edge Computing | Decentralized processing of data near the source of generation, beneficial in higher education for low-latency AI applications, real-time analytics in remote locations, and efficient utilization of computing resources. |
Ethical AI | See Artificial Intelligence Ethics (AI Ethics). |
Explainable AI (XAI) | Techniques and methods ensuring transparency and interpretability of AI models and decisions, essential in higher education for maintaining trust, accountability, and regulatory compliance in academic and administrative AI systems. |
Generative Adversarial Networks (GANs) | AI frameworks where two neural networks compete to generate realistic data, utilized in higher education for creating synthetic datasets, generating educational content, and improving data privacy. |
Hyperparameters | Parameters defining the configuration and behavior of AI models, requiring optimization and tuning in higher education applications for achieving optimal performance, reliability, and scalability (see the tuning sketch following this table). |
Machine Learning (ML) | A subset of AI focused on algorithms and techniques that enable computers to learn from data and improve their performance over time without being explicitly programmed. Machine learning uses algorithms, data sets, and statistical modeling to build models that recognize patterns, make predictions, and interpret new data; its purpose is to automate analytical model-building so that computers can learn from data with little human intervention. |
Model Interpretability | The ability to explain and understand AI models and their decisions, essential in higher education for transparent student assessment, research reproducibility, and accountability in automated decision-making systems. |
Model Robustness | The capability of AI models to maintain high performance and reliability under varying conditions and inputs, critical in higher education for ensuring accurate student assessment, research findings, and administrative decision-making. |
Natural Language Processing (NLP) | The field of AI concerned with enabling computers to understand, interpret, and generate human language, utilized in higher education for automated grading, language learning support, and virtual assistants. |
Neural Network | A computational model inspired by the human brain's structure, employed in higher education for various applications including student performance prediction, adaptive learning systems, and data analysis. |
Personalization | the process of displaying search results or modifying the behavior of an online platform to match an individual's expressed or presumed preferences, which are established by creating digital profiles and using that data to predict whether and how the individual will act on algorithmically selected information. This process drives targeted digital advertising and has been blamed for exacerbating information silos, contributing to political polarization and the flow of disinformation. Ironically, to consider information “personal” implies it is private, but personalization systematically strips its targets of privacy. |
Platform | an ambiguous term that means both software used on personal computers and software deployed online to provide a service, such as web search, video sharing, shopping, or social interaction. Often these systems use proprietary algorithms to mediate the flow of information while enabling third parties to develop apps, advertising, and content, thus becoming digital spaces for the individual performance of identity online, data-driven persuasion (commercial as well as political), and group formation through social interaction. In this report, we use the term to refer to “internet giants” such as Google, YouTube, Instagram, and Facebook and others mentioned by students in our focus group sessions. |
Reinforcement Learning | An AI paradigm where algorithms learn by interacting with an environment and receiving feedback, applicable in higher education for adaptive learning environments and personalized feedback systems (see the sketch following this table). |
Semi-Supervised Learning | A combination of supervised and unsupervised learning techniques, employed in higher education for tasks such as student performance prediction with limited labeled data and large-scale data analysis. |
Supervised Learning | A machine learning approach where models are trained on labeled data, used in higher education for predictive modeling, recommendation systems, and intelligent tutoring systems (see the sketch following this table). |
Transfer Learning | A machine learning technique where models trained on one task are adapted to perform related tasks, valuable in higher education for leveraging pre-trained models in educational content creation, student support systems, and academic research. |
Unsupervised Learning | A machine learning approach where models uncover patterns and structures in unlabeled data, relevant in higher education for clustering similar student cohorts, curriculum optimization, and anomaly detection (see the sketch following this table). |
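To make a few of the machine learning terms above more concrete, the sketches below show each idea in a few lines of code. They are minimal, hypothetical illustrations only: the data, the feature names (hours studied, assignments completed), and the use of the open-source scikit-learn library are our own assumptions, not a description of any campus system. First, supervised learning: a model is trained on labeled examples and then asked to predict labels for examples it has not seen.

```python
# Minimal sketch of supervised learning, assuming scikit-learn is installed.
# The features and pass/fail labels are hypothetical and exist only to
# illustrate what "training on labeled data" means.
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

# Hypothetical data: [hours_studied, assignments_completed] per student
X = [[2, 3], [10, 9], [4, 5], [12, 10], [1, 2], [8, 8], [3, 3], [11, 9]]
y = [0, 1, 0, 1, 0, 1, 0, 1]  # labels: 0 = did not pass, 1 = passed

# Hold out a few labeled examples to test the model on data it has not seen
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.25, random_state=0)

model = LogisticRegression()
model.fit(X_train, y_train)          # learn from the labeled examples
print(model.predict(X_test))         # predicted labels for the held-out examples
print(model.score(X_test, y_test))   # fraction predicted correctly
```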
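Unsupervised learning, by contrast, receives no labels at all; the algorithm groups similar records on its own. A minimal sketch, again assuming scikit-learn and entirely hypothetical engagement data:

```python
# Minimal sketch of unsupervised learning (k-means clustering), assuming
# scikit-learn is installed. The engagement features are hypothetical.
from sklearn.cluster import KMeans

# Hypothetical data: [logins_per_week, forum_posts_per_week] per student
X = [[1, 0], [2, 1], [1, 1], [9, 7], [10, 8], [8, 6]]

# No labels are supplied; the algorithm finds two groups of similar rows
kmeans = KMeans(n_clusters=2, n_init=10, random_state=0)
labels = kmeans.fit_predict(X)

print(labels)                   # which cluster each student was assigned to
print(kmeans.cluster_centers_)  # the "typical" member of each cluster
```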
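Hyperparameters are settings a practitioner chooses before training rather than values the model learns from data; they are usually tuned by trying several candidates and keeping the one that cross-validates best. A minimal sketch, assuming scikit-learn and the same toy data as above:

```python
# Minimal sketch of hyperparameter tuning with cross-validated grid search,
# assuming scikit-learn is installed. The toy data and the candidate values
# for C are hypothetical, chosen only to illustrate the tuning loop.
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import GridSearchCV

X = [[2, 3], [10, 9], [4, 5], [12, 10], [1, 2], [8, 8], [3, 3], [11, 9]]
y = [0, 1, 0, 1, 0, 1, 0, 1]

# C (regularization strength) is set by the practitioner, not learned from
# the data, so it is tuned by cross-validating each candidate value
search = GridSearchCV(
    LogisticRegression(),
    param_grid={"C": [0.1, 1.0, 10.0]},
    cv=2,
)
search.fit(X, y)

print(search.best_params_)  # the candidate setting that performed best
print(search.best_score_)   # its mean cross-validated accuracy
```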
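Finally, reinforcement learning learns by acting, observing feedback, and adjusting. The plain-Python sketch below uses a simple epsilon-greedy "bandit" agent; the three actions (hypothetical hint styles in a tutoring system) and their reward probabilities are invented for illustration.

```python
# Minimal sketch of reinforcement learning via an epsilon-greedy bandit,
# in plain Python. The three "hint styles" and their reward probabilities
# are hypothetical; the point is the learn-from-feedback loop itself.
import random

random.seed(0)
reward_prob = [0.2, 0.5, 0.8]   # hidden effectiveness of each hint style
estimates = [0.0, 0.0, 0.0]     # the agent's current value estimate per action
counts = [0, 0, 0]              # how often each action has been tried
epsilon = 0.1                   # fraction of the time the agent explores

for step in range(1000):
    if random.random() < epsilon:
        action = random.randrange(3)                        # explore: try a random action
    else:
        action = max(range(3), key=lambda a: estimates[a])  # exploit: pick the best so far
    # The environment gives feedback: 1 if the hint helped, 0 if it did not
    reward = 1 if random.random() < reward_prob[action] else 0
    counts[action] += 1
    # Incremental average: nudge the estimate toward the observed reward
    estimates[action] += (reward - estimates[action]) / counts[action]

print(estimates)  # after many interactions, the best hint style scores highest
```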