
HUMAN FIRST
Our mission is to protect and enhance human intelligence in the age of AI.

Is AI Making Us Stupid?
Artificial intelligence is already benefiting science, medicine, information technology, marketing and more.
But there are hazards, such as misinformation, economic havoc, and environmental damage.
Another threat may prove the most important: research has shown that AI can degrade human cognition*. Users risk eroding their critical thinking, reasoning abilities, and creativity.
Protect the Mental Wilderness!
We are not “anti-AI”. This new technology is here to stay — and grow. We are pro-human. Our project seeks to equip people to live positively in a new mental environment.
There are two main thrusts:
- Empower reflective AI usage
- Teach relevant cognitive skills
In other words, we want to help people use AI more mindfully and critically. And we want to bolster natural abilities that have become more important — not less! — in the age of AI.
Four Fields of Action
The Human Intelligence Trust will fulfill its mission in four ways:
1. TOOLS
We are building tools to help people use AI thoughtfully, safely and ethically.
2. EDUCATION
We are designing online education programs focused on critical thinking and higher-level reasoning.
3. RESEARCH
Our research focus is the use of AI itself to enrich human thinking, a largely unexplored area.
4. COMMUNITY
We seek to build a community of “citizen thinkers” aligned with our mission to protect and enhance human intelligence.
PRETHINK: Our Starting Point
Our first initiative is a software application, PRETHINK. It is based on a strategy we call the Reflective User Interface (RUI).
PRETHINK is a portal for accessing AI that prompts the user first to clarify their request and then to critically evaluate the AI's output.
PRETHINK is currently in beta testing.
Future Initiatives
TOOLS
- Reflective User Interfaces: We will expand on the concept underlying PRETHINK to build new tools calibrated for specific domains and use cases.
- Inverse Prompting: We are building tools that transform the usual human-AI relationship. With inverse prompting, instead of you pulling ideas from AI, AI pulls ideas from you.
- Human-AI Collaboration: AI is best used as a collaborator, rather than as a mental prosthesis. We are designing tools to facilitate human-AI collaboration.
EDUCATION
- Newsletter: We will soon launch THINK HUMAN, a newsletter that curates the latest research and conversations about human cognition and AI usage.
- Basic Training: We are finalizing MIND EXPANDER, an initial online video course designed to increase “cognitive mobility” — in other words, to help people think further and faster in any domain.
- Curricula Design: Later we plan to design formal curricula for schools, colleges, and enterprises, focused on the mindful use of artificial intelligence.
RESEARCH
- Cognitive Gain: Our objective is to research the cognitive gains that can be achieved by strategies such as the Reflective User Interface and Inverse Prompting.
- Partnerships: To achieve our research goals, we will seek partnerships with academic institutions focused on the human implications of new technologies.
COMMUNITY
- Citizen Thinkers: We will initially build an online community of individuals from all walks of life who share our mission to protect and enhance human intelligence.
- Conferences and Events: Once our community has sufficiently developed, we will host conferences and events with participating experts in AI, cognition, psychology, and education.
You Are Invited!
The Human Intelligence Trust is an initiative launched by a handful of concerned individuals. We would love to build a community that shares our core objectives.
If you care about this issue, you can likely add more than we can imagine to the project's future development. We warmly welcome your participation. Please contact us here, and we'll be in touch:
NOTE: Your privacy matters to us. We'll never sell your data. We may send you occasional emails, but you can always unsubscribe :)
*References
Gerlich, Michael. “AI Tools in Society: Impacts on Cognitive Offloading and the Future of Critical Thinking.” Societies, vol. 15, no. 1, Jan. 2025, p. 6, https://doi.org/10.3390/soc15010006.
Fan, Yizhou, et al. “Beware of Metacognitive Laziness: Effects of Generative Artificial Intelligence on Learning Motivation, Processes, and Performance.” British Journal of Educational Technology, vol. 56, no. 2, 2025, pp. 489–530, https://doi.org/10.1111/bjet.13544.
Dergaa, Ismail, et al. “From Tools to Threats: A Reflection on the Impact of Artificial-Intelligence Chatbots on Cognitive Health.” Frontiers in Psychology, vol. 15, Apr. 2024, p. 1259845, https://doi.org/10.3389/fpsyg.2024.1259845.
Jose, Binny, et al. “The Cognitive Paradox of AI in Education: Between Enhancement and Erosion.” Frontiers in Psychology, https://doi.org/10.3389/fpsyg.2025.1550621. Accessed 6 Aug. 2025.