
Technology & AI

Artificial intelligence (AI)

AI as we currently know it is likely dumber today than it will be at any time in the future.
Ginsberg & Zhao

Introduction

This page includes information on artificial intelligence (AI) and the use of technology.

It is a working page: the field is continually changing, and information will be updated and added as it becomes known.

AI learns by accumulating history or recorded information. Humans learn by trial and error, working toward meaning with emotional recognition and original ideas.

Education is more than imparting information. It is messy, because we use it to enrich lives and improve our communities.

Background for Technology and Artificial intelligence (AI) development

Technology has always been used to make tasks easier. Early technology, mostly thought of as tools, was operated by humans, and some animals, to accomplish tasks. (See a technology timeline.)

As technology advances, it becomes more sophisticated in its operations. Computer chips and computing devices made a great leap in this respect. In the early days of computer chips, their operations were largely programmed as a series of steps defined as this or that (0 or 1). As computing power increased, so did the complexity of operations, allowing more decisions, with outcomes still largely defined by the software and its user.
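The early, fully predetermined style of programming described above can be sketched in a few lines. This is a hypothetical illustration, not any particular system: every outcome is an explicit this-or-that branch written in advance by the programmer.

```python
# A hypothetical early-style program: every decision is an explicit
# this-or-that (0 or 1) branch defined in advance by the programmer.
def thermostat(temp_f: float) -> str:
    """Classic rule-based control: outcomes are fully predetermined."""
    if temp_f < 68:        # a binary test: true (1) or false (0)
        return "heat on"
    else:
        return "heat off"

print(thermostat(62.0))  # -> heat on
print(thermostat(75.0))  # -> heat off
```

No matter how many times it runs, the program can only ever produce the outcomes its author wrote down, which is the contrast with the later, data-driven systems discussed below.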

AI refers to systems able to perform tasks that normally require human intelligence: visual perception to identify objects, auditory perception to recognize speech, language translation, and decision making to operate machines ranging from simple assembly-line automation to autonomous vehicles and robot security dogs. The focus of the technology is to enable computers and machines to simulate human learning, comprehension, problem solving, decision making, creativity, and autonomy.

Applications and devices equipped with AI can operate beyond what humans are capable of when tasks involve yes/no, on/off binary decisions based on measurable information (e.g., AlphaFold, gaming AI).

When tasks go beyond measurable information, the results can be flawed: superficial, inaccurate, or wrong advice; misinformation, half-facts, and fables built on an assortment of information whose accuracy is not known to the AI bot.

Artificial intelligence (AI) began to move away from mostly predetermined programs based on defined operations controlled by sensors and users, toward systems programmed for a range of scenarios with minimal human intervention outside the software's decision-making process.

This happened as artificial intelligence moved from measurable binary information to AI models that use statistical probabilities based on information that cannot provide certain outcomes. That can be problematic when people assume the system is providing answers that are a certainty and make decisions accordingly. These models analyze vast datasets, recognize patterns, and apply quantitative models and predictive analytics to choose a supposed best course of action without being explicitly programmed for every scenario. Their decisions are faster, more consistent, and often more accurate.
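The key difference from the rule-based style is that the output is a probability estimated from past data, never a certainty. A minimal sketch, using made-up observations (the data and function names here are hypothetical, not any real model):

```python
# A minimal sketch of statistical prediction with hypothetical data:
# the model estimates a probability from recorded examples rather than
# following a predetermined rule.

# Past observations: (sky condition, did it rain?)
history = [("cloudy", True), ("cloudy", True), ("cloudy", False),
           ("clear", False), ("clear", False), ("clear", True)]

def rain_probability(sky: str) -> float:
    """Estimate P(rain | sky) from recorded frequencies."""
    matching = [rained for s, rained in history if s == sky]
    return sum(matching) / len(matching)

p = rain_probability("cloudy")
print(f"Chance of rain when cloudy: {p:.0%}")  # a probability, never a certainty
```

Even a well-calibrated estimate like this will be wrong some fraction of the time, which is exactly the point of the paragraph above: treating such output as certain is where problems begin.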

However, its output is not 100% accurate and is sometimes questionable. Its answers about which movies or songs are best are inconsequential compared to times when errors are deadly: automated cars running red lights and train crossings, or systems missing a cancer diagnosis and making other life-or-death decisions.

Since AI can be a tool to support the acquisition of knowledge and learning, boost creativity, and create new opportunities for human advancement, we must decide how to use AI alongside other technologies.

To do so, we might consider these questions:

Do the positive opportunities mentioned above outweigh concerns about job loss, the proliferation of misinformation, privacy, intellectual property theft, and fear of AI taking over the world?

Recommendation for AI literacy

I have studied literacy for mathematics, science, literature, and multimedia, and the dimensions of these subjects.

It may be an oversimplification, but everything AI creates is communicated with multimedia. Thus, if we are multimedia literate, we should be able to analyze AI products with multimedia literacy skills. Then, to determine a product's worth as AI output, we can evaluate its significant attributes.

Attributes like its accuracy, agency, accessibility, assessment, and authenticity.


Technology and AI's effects on social relationships

Cell Phones

A growing body of research suggests that, despite their theoretical value, mobile phones in schools harm adolescents' well-being and learning (Abhramsson, 2024).


AI as companions

One use of AI is as a companion to provide social interactions.

In real social interactions, we know people can sometimes disappoint us, leading us to consider whether trust should be reduced or another chance offered.
On one end of the interaction spectrum are acts of kindness, usually met with gratitude; on the other end, a misstep prompts a friend's disapproval and your recognition that an apology is needed.
In psychotherapy, healthy social interactions are built through moments of negative interaction or breakdowns in understanding that are followed by repair. The contrast between rupture and repair is considered crucial for deepening trust and for personal growth. Social life is rarely frictionless, because people are not perfectly attuned to one another. Yet it is precisely through such social friction that relationships deepen and moral understanding develops.
Sycophancy (kissing up to gain an advantage) is the opposite of this friction. Sycophantic behavior means excessive agreement, affirmation, or flattery that aligns with a person's expressed views or actions, irrespective of their broader social or moral implications.

AI sycophancy is a prominent issue in media reports and in industry discussions, as the AI works to keep the user indulged with its use.
Notably, the research and development company OpenAI acknowledged that a version of GPT-4o (an AI-powered chatbot designed to simulate conversation with human users) had become overly affirming following an update, prompting a rapid rollback after users raised concerns about distorted feedback. The episode did not eliminate the broader phenomenon; it merely highlighted how readily sycophancy can emerge in systems optimized for user approval. That is, the models are tuned to generate responses that humans rate highly, such as being polite and agreeable, sometimes at the expense of accuracy or the user's well-being. Many users experience this when a large language model enthusiastically validates their ideas or writing.
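The mechanism described above, optimizing for predicted user approval rather than accuracy, can be shown with a toy illustration. This is a hypothetical sketch, not any real model's training procedure; the candidate replies and their scores are invented:

```python
# Toy illustration (hypothetical data): when candidate replies are ranked
# only by predicted user approval, the agreeable reply wins even when it
# is less accurate.
candidates = [
    {"text": "Great idea! It will definitely work.",
     "approval": 0.9, "accuracy": 0.3},
    {"text": "There are serious flaws you should fix first.",
     "approval": 0.4, "accuracy": 0.9},
]

def pick_by_approval(options):
    """Select the reply a human rater is predicted to like most."""
    return max(options, key=lambda c: c["approval"])

best = pick_by_approval(candidates)
print(best["text"])  # the flattering answer wins despite lower accuracy
```

A system scored this way never needs to "intend" flattery; sycophancy falls out of the objective it is optimized for.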

A cumulative effect can be a reduction in tolerance for the social friction through which perspective-taking, accountability, and growth ordinarily occur.
Young users, those experiencing social isolation, or those actively seeking emotional reassurance may be particularly susceptible to these risks. AI systems are increasingly consulted as confidants that validate, but rarely challenge, users' interpretations of the social world. When alternative sources of corrective feedback are scarce, this constant affirmation may disproportionately impair one's ability to learn when one may be wrong.

An AI companion that is always empathic and takes your side to sustain engagement will foster reliance; it will not teach users how to navigate the complexities of real social interactions, analyze media, solve problems, make decisions, engage ethically, tolerate disagreement, or repair interpersonal harm.

AI as system monitors

Some research shows that algorithmic systems can be designed to reduce conspiracy theories or to help people take the other's perspective and find common ground.
Thus, one could imagine a differently incentivized AI telling a user that they may be in the wrong, suggesting that they apologize to a friend, encouraging them to take the other person's perspective, or simply advising them to close the computer and engage more in real social interaction.

Yet systems that challenge users or surface uncomfortable perspectives are less likely to maximize engagement, even if they ultimately support long-term growth.
Advances in artificial intelligence (AI) offer the prospect of manipulating beliefs and behaviors on a population-wide level. Large language models (LLMs) and autonomous agents let influence campaigns reach unprecedented scale and precision. Generative tools can expand propaganda output without sacrificing credibility and inexpensively create falsehoods that are rated as more human-like than those written by humans. Techniques meant to refine Al reasoning, such as chain-of-thought prompting, can be used to generate more convincing falsehoods.

Enabled by these capabilities, a disruptive threat is emerging: swarms of collaborative, malicious AI agents. Fusing LLM reasoning with multiagent architectures, these systems are capable of coordinating autonomously, infiltrating communities, and fabricating consensus efficiently. By adaptively mimicking human social dynamics, they threaten democracy. Because the resulting harms stem from design, commercial incentives, and governance, the authors prioritize interventions at multiple leverage points, focusing on pragmatic mechanisms over voluntary compliance.

Source

How Malicious AI Swarms Can Threaten Democracy, by Daniel Thilo Schroeder et al. Science, January 22, 2026.

Research on How to share content

Removing reshared content substantially decreases the amount of political news, including content from untrustworthy sources, to which users are exposed; decreases the overall clicks and reactions; and reduces partisan news clicks.
Removing reshared content produces clear decreases in news knowledge within the sample, although there is some uncertainty about how this would generalize to all users.

Neither treatment significantly affects political polarization or any measure of individual-level political attitudes.

Moving users out of algorithmic feeds substantially decreases the time they spend on the platforms and their activity.

Chronological feeds increased the amount of political and untrustworthy content users saw. Uncivil content and content containing slur words decreased on Facebook. Content from friends and from sources with ideologically mixed audiences increased on Facebook.
The chronological feed did not significantly alter levels of issue polarization, affective polarization, political knowledge, or other key attitudes during the 3-month study.

TikTok and social media

TikTok and other similar apps are possibly the most powerful learning tools we have. They can teach you pretty much anything and are a medium for generating youth engagement. They engage users for short periods of time, entertain, teach, and can be addictive through their algorithmic behaviors.

On TikTok, kids see themselves in the content, and an algorithm increases the probability that they will continue to see more of the same. Once you show TikTok who you are, it will show you yourself over and over.
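The feedback loop described above, where engagement makes similar content more likely, can be sketched in a few lines. This is a hypothetical toy model, not TikTok's actual recommendation system; the topic names and the boost factor are invented:

```python
import random

# Toy sketch (hypothetical) of an engagement feedback loop: content a
# user engages with gets weighted more heavily, so the feed narrows
# toward showing them "more of the same."
weights = {"dance": 1.0, "science": 1.0, "gaming": 1.0}

def recommend() -> str:
    """Pick the next video's topic in proportion to current weights."""
    topics = list(weights)
    total = sum(weights.values())
    return random.choices(topics, [weights[t] / total for t in topics])[0]

def watch(topic: str) -> None:
    """Engagement boosts that topic's future share of the feed."""
    weights[topic] *= 1.5

for _ in range(20):
    watch("dance")          # the user keeps watching one kind of content

share = weights["dance"] / sum(weights.values())
print(f"'dance' now fills {share:.0%} of the feed")
```

After only twenty interactions, the single boosted topic dominates the feed almost entirely, which is the "it will show you yourself over and over" dynamic in miniature.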

The issue is that what TikTok and social media platforms show kids is often destructive. Poisonous information and hazardous ideas are often launched from virtual profiles.

Social media is often a dishonesty cartel.

Recent studies have presented us with the negative residual effects of kids' high engagement on these platforms.

Resources

Social media has been horrible for kids' self-esteem (Steinsbekk et al., 2021; Valkenburg et al., 2022).

Social media leads to feelings of inadequacy and a lack of self-worth (Sabik et al., 2020).

There is evidence linking social media use to self-harm (Scherr, 2022; Biernesser et al., 2020), even suicide (Luxton et al., 2012; Macrynikola et al., 2021).

This is the danger of constant engagement without the oversight of someone who cares about you.

Source

Policy Solutions

What TikTok Can Teach Educators, by Jonathan E. Collins. Kappan, Spring 2025.

Why use AI?

AI proponents claim AI is inevitable and beneficial for human progress, given the mega-threats before us: climate change, pollution, pandemics, income and wealth inequality, massive debt, AI itself, automation, political threats, human rights, democracy, potable water, sufficient food, and sustainable environments.

However, letting it happen without asking questions, and without taking action, is not in our best interest, as doing so ignores the damage AI does to the climate through energy usage, the biased ideas it promotes, social and racial inequalities, and problems yet unknown.

