
What is a Technology?

Something can be called a technology when it's a practical application of scientific knowledge used to solve problems or perform tasks.


Is Artificial Intelligence a Technology?

I know most people would instinctively answer yes, but I want to lay out the details.


Artificial Intelligence (AI) is considered a technology. It's a field that applies scientific principles and computational methods to create machines and software that can perform tasks requiring human-like intelligence. These tasks include learning, reasoning, problem-solving, perception, and language understanding. AI is a practical application of various scientific disciplines, including computer science, mathematics, and engineering, to solve real-world problems or accomplish tasks that traditionally require human intelligence. Therefore, it fits the definition of technology as a practical application of scientific knowledge.


LLMs like ChatGPT, Bard, and Claude fall within this category; they are specialized applications of Artificial Intelligence, focused primarily on natural language understanding and generation. Utilizing principles from computer science, linguistics, and machine learning, LLMs are designed to interpret and produce human-like text based on the data they've been trained on. These models serve various purposes, from answering questions and providing explanations to aiding in research and creative writing. As such, they fit the definition of technology as a practical application of scientific knowledge aimed at solving specific problems or accomplishing tasks.


The main difference between describing AI in general and LLMs specifically would be the narrower focus on the tasks LLMs are designed for, which is predominantly natural language understanding and generation.


Are LLMs (like ChatGPT) Anything Other Than A Technology?

A bit of an open question, but I wanted to consider where that might go...


Large Language Models (LLMs) like ChatGPT are primarily considered a form of technology, as they are applications of scientific and computational principles designed to perform specific tasks—in this case, natural language understanding and generation.


However, the impact and implications of LLMs extend beyond just being a technology:

  1. Research Subject: They are also subjects of academic research, studied for their capabilities, limitations, and ethical considerations.

  2. Cultural Influence: LLMs are starting to have a cultural impact, as they influence how people interact with technology and with each other online.

  3. Ethical Concerns: They raise ethical questions, particularly concerning data privacy, misinformation, and the potential for bias.

  4. Economic Factor: LLMs can be seen as an economic asset or commodity, as companies use them to offer services, generate revenue, or gain competitive advantages.

  5. Educational Tools: In educational settings, LLMs are used as supplementary tools for teaching and research.

  6. Legal Considerations: As they become more integrated into decision-making processes, they may also come under legal scrutiny for how they impact individual rights or societal norms.

  7. Artistic Medium: Some artists and writers are beginning to use LLMs as a form of collaborative tool for creative expression.

  8. Human-Computer Interaction: They also represent a step forward in human-computer interaction, shaping how people will interact with machines in the future.

So while LLMs are fundamentally a technology, their influence is multi-dimensional, making them relevant to various disciplines and societal considerations.


Scientific Knowledge

What distinguishes scientific knowledge from other knowledge?


Scientific knowledge is distinguished by its systematic approach to understanding the natural or social world. It relies on empirical evidence (data collected through observation and/or experimentation) and is subject to principles such as falsifiability, repeatability, and peer review. Key features that set scientific knowledge apart from other forms of knowledge include:

  1. Methodology: The scientific method involves forming hypotheses, conducting experiments, and drawing conclusions based on evidence.

  2. Objectivity: Attempts are made to minimize subjective bias through controlled experiments and peer review.

  3. Falsifiability: Scientific theories and hypotheses are testable and can be proven wrong, which is a criterion for their scientific validity. I dig a little deeper into this in a section below.

  4. Repeatability: Experiments must be replicable by other researchers to verify results.

  5. Predictive Power: Scientific theories often allow predictions to be made, which can then be tested for accuracy.

  6. Peer Review: Research is reviewed by experts in the field before it is widely accepted or published.

  7. Cumulative: Science builds upon previous discoveries, allowing for a continually growing body of reliable knowledge.

  8. Quantifiable: Where possible, scientific knowledge often relies on data that can be measured and quantified.

These criteria help ensure that scientific knowledge is reliable, consistent, and free from personal or cultural biases, setting it apart from other forms of knowledge like intuition, tradition, or anecdotal evidence; see the expanded list below.


Other Forms of Knowledge

Besides scientific knowledge, there are several other forms of knowledge that people rely on for understanding and interacting with the world. Here are some:

  1. Personal Experience: This is knowledge gained through one's own lived experiences. While it may be subjective and unique to the individual, it can be profoundly insightful and practical.

  2. Intuition: This is a form of knowledge that comes from an instinctual "gut feeling." Intuition can sometimes guide effective decision-making, even when the reasoning behind it is not immediately clear.

  3. Tradition: This is knowledge that is passed down from generation to generation. While not scientifically validated, traditional knowledge, including folklore or old wives' tales, can offer practical wisdom.

  4. Religious or Spiritual Knowledge: This is based on the teachings of spiritual texts, doctrines, or experiences, and while not empirically proven, can be deeply meaningful for those who believe in them.

  5. Philosophical Knowledge: This comes from logical reasoning and critical thinking, often exploring questions that may not be empirically testable, such as the nature of morality or existence.

  6. Empirical or Observational Knowledge: While similar to scientific knowledge, this type is often less rigorous and can come from casual observations and basic data collection without the formal scientific method.

  7. Expertise or Skill-based Knowledge: This is specialized knowledge gained through extensive practice or education in a particular subject or activity, also known as "know-how."

  8. Common Sense: This is practical knowledge for everyday problem-solving that may not be rigorously tested but is generally accepted and understood.

  9. Legal Knowledge: This comes from understanding laws and regulations, either through formal study or lived experience.

  10. Cultural Knowledge: This is knowledge about social norms, traditions, and practices within a particular group or society, often passed down through social interaction.

  11. Historical Knowledge: This is an understanding of past events, cultures, and circumstances based on historical records and interpretations.

  12. Aesthetic Knowledge: This is an understanding of beauty, art, and taste based on personal or cultural criteria rather than empirical evidence.

  13. Ethical or Moral Knowledge: This concerns what is right and wrong based on ethical principles, which could be personal, societal, or philosophical in nature.

Each of these forms of knowledge has its own value and limitations, and they often interact and overlap in complex ways.


Falsifiability

The principle of falsifiability, introduced by philosopher Karl Popper, is a criterion for determining whether a statement or theory can be considered scientific. Falsifiability doesn't mean that a theory is false; rather, it means that the theory is testable—it can be subjected to experiments or observations that could potentially prove it false.


For a theory to be considered scientific, there must be some conceivable empirical test that could show the theory to be false. If no such test exists, the theory falls outside the realm of science, even if it may still be of philosophical or other significance.


Here's a simple example: The statement "All swans are white" is falsifiable because you can test it. If you find a single swan that is not white, then the statement is proven false. On the other hand, the statement "Swans are usually white" is less easily falsified, as exceptions do not necessarily disprove the rule.
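To make the swan example concrete, here's a minimal sketch I've added in Python (the swan observations are made up for illustration): a universal claim is falsified by a single counterexample, while a hedged claim like "usually white" survives one exception and first needs "usually" pinned down before it can be tested at all.

```python
# A universal claim ("all swans are white") is falsified by one counterexample.
observed_swans = ["white", "white", "white", "black", "white"]  # made-up data

all_swans_are_white = all(colour == "white" for colour in observed_swans)
print(all_swans_are_white)  # the single black swan falsifies the claim: False

# A hedged claim ("swans are usually white") is harder to falsify: one
# exception doesn't disprove it, so "usually" must be defined first.
white_fraction = observed_swans.count("white") / len(observed_swans)
print(white_fraction > 0.5)  # True under this (arbitrary) reading of "usually"
```

Note that the arbitrary 0.5 threshold is exactly the point: until the hedged claim is made precise, there is no test that could refute it.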


The importance of falsifiability lies in its ability to separate scientific knowledge from other types of assertions. For instance:

  1. Limits Subjectivity: When a theory is falsifiable, scientists have a means of challenging it with empirical evidence, reducing the impact of individual or cultural biases.

  2. Encourages Experimentation: If a theory can be proven wrong, then it invites researchers to perform tests to either confirm or disprove it, thereby driving scientific progress.

  3. Adaptability: If a falsifiable theory is proven wrong, it can be modified or replaced, allowing science to adapt and refine its understanding over time.

  4. Accountability: Falsifiability ensures that theories are subject to ongoing scrutiny and are not just accepted as fact without challenge.

  5. Practical Applications: Falsifiable theories can be tested and applied in real-world scenarios, such as in medicine or engineering, to solve problems or create new technologies.

So, the principle of falsifiability is not about proving theories right but about providing a structured way to challenge them. Theories that withstand repeated tests gain credibility but are never considered absolutely "true" in a scientific sense; rather, they are considered "not yet falsified."


Falsifiability and Artificial Intelligence (AI)

The principle of falsifiability plays a significant role in the development and validation of artificial intelligence (AI) models. Here are some examples to illustrate its importance:

  1. Algorithmic Performance: One common assertion in AI is that a certain algorithm performs better than another in solving a specific problem. This is a falsifiable claim. By using a standard dataset and performance metrics (like accuracy, precision, recall, etc.), researchers can empirically test which algorithm performs better.

  2. Natural Language Processing (NLP): In the case of NLP, a claim might be that a particular model can understand and interpret human language with an accuracy rate of, say, 95%. This is falsifiable through tests that evaluate the model's performance against human-annotated benchmarks.

  3. Image Recognition: If an AI model claims to identify a certain type of object in images with a 99% accuracy rate, this is a falsifiable statement. Tests can be designed using a diverse set of images to see if the model meets or fails this accuracy level.

  4. Fairness and Bias: Researchers often claim that their AI models are "fair" or "unbiased" when it comes to race, gender, or other sensitive variables. These claims are also falsifiable, often tested by running the model on a dataset that is diverse with respect to these variables and checking whether the model's predictions are indeed unbiased.

  5. Autonomous Vehicles: Claims regarding the safety levels of autonomous driving systems are falsifiable. These could be tested under a variety of conditions (rain, snow, fog, etc.) to confirm whether they meet the stated levels of safety.

  6. Predictive Analytics in Healthcare: If an AI model claims to predict patient outcomes in a clinical setting with a certain level of accuracy, that claim can be falsified by applying the model to real-world data and comparing its predictions to actual outcomes.

  7. Reinforcement Learning: In gaming or robotic control, a falsifiable claim might be that a reinforcement learning model can learn a task to a certain level of proficiency within a specific number of trials. This can be tested by running the model and counting the trials it takes to reach that proficiency.

  8. Generative Models: If a generative model claims to produce new data that is indistinguishable from real data, this is a falsifiable claim. Tests can be designed to compare the generated data with real data across various metrics.

By applying the principle of falsifiability to these aspects of AI, researchers can empirically validate or refute claims, driving the field forward by eliminating models or approaches that don't work and improving upon those that do.
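Point 1 above, comparing algorithms on a shared dataset, can be sketched in a few lines of Python. The labels and predictions here are invented for illustration; a real comparison would use an established benchmark dataset and metric.

```python
# Sketch: testing the falsifiable claim "model A is more accurate than
# model B" on a held-out test set. All data here is invented.

def accuracy(predictions, labels):
    """Fraction of predictions that match the true labels."""
    correct = sum(p == y for p, y in zip(predictions, labels))
    return correct / len(labels)

true_labels   = [1, 0, 1, 1, 0, 1, 0, 0]
model_a_preds = [1, 0, 1, 1, 0, 0, 0, 0]  # 7 of 8 correct
model_b_preds = [1, 1, 1, 0, 0, 0, 0, 1]  # 4 of 8 correct

acc_a = accuracy(model_a_preds, true_labels)
acc_b = accuracy(model_b_preds, true_labels)

# The claim survives this test only if A beats B; a single benchmark
# where B wins would falsify the unqualified claim.
print(acc_a, acc_b, acc_a > acc_b)
```

The same pattern (state a measurable claim, pick a metric and dataset, check whether the numbers could refute it) underlies most of the other points in the list.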


Falsifiability in the Development of LLMs Like ChatGPT

In the development and evaluation of large language models (LLMs) like ChatGPT, the principle of falsifiability can be applied in various ways, especially when it comes to evaluating the performance and capabilities of the model. Here are some examples:

  1. Accuracy: An assertion might be that ChatGPT can answer factual questions with an accuracy rate of X%. This is a falsifiable claim, as one can administer a set of factual questions to the model and check the percentage of correct answers against verified information.

  2. Contextual Understanding: Another claim could be that the model can understand and respond appropriately to context in a conversation. Researchers can design tests involving context-heavy dialogues to falsify or validate this claim.

  3. Sentiment Analysis: If the developers assert that ChatGPT can accurately identify the sentiment of a text input (e.g., positive, negative, neutral) at a certain rate, this claim is falsifiable. It can be tested against a labeled dataset for sentiment analysis.

  4. Response Time: Claims about the model's response time (e.g., "ChatGPT can generate a reply within Y milliseconds") are also falsifiable, testable through time-tracking software during the conversation.

  5. Safety and Bias: If it's claimed that the model has been fine-tuned to not produce harmful or biased outputs, this is a falsifiable assertion. Tests can be conducted using inputs that have been problematic for other models to see if this model performs differently.

  6. Customization: If the model claims to be able to adapt its language style or tone based on user instructions, that's another falsifiable claim. One could create a test suite of instructions asking the model to respond in different styles or tones (e.g., formal, casual, humorous, etc.) and evaluate the outputs.

  7. Specific Capabilities: Some LLMs claim to perform specific tasks like summarization, translation, or code generation at a certain level of proficiency. These claims can be rigorously tested against human-generated benchmarks to check their validity.

  8. Generalization: If a claim is made that the model can generalize across various domains or topics effectively, this too can be tested by posing questions or tasks from a broad range of subjects and evaluating the model's performance.

By adhering to the principle of falsifiability, developers and researchers can scientifically evaluate the strengths and limitations of LLMs like ChatGPT. This ensures that the model is being improved upon based on empirical evidence, rather than assumptions or anecdotal experiences.
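A claimed factual-accuracy rate (point 1 above) could be checked with a harness along these lines. To be clear about what's assumed: `ask_model` is a hypothetical stand-in for whatever model API you would actually call, and the benchmark questions and claimed rate are invented for illustration.

```python
# Sketch: testing the falsifiable claim "this model answers factual
# questions with at least 95% accuracy". `ask_model` is a hypothetical
# placeholder for a real LLM call; the benchmark here is invented.

CLAIMED_ACCURACY = 0.95

benchmark = [
    ("What is the capital of France?", "paris"),
    ("How many planets orbit the Sun?", "8"),
    ("What year did the Berlin Wall fall?", "1989"),
]

def ask_model(question):
    """Placeholder for a real model call; returns canned answers here."""
    canned = {
        "What is the capital of France?": "Paris",
        "How many planets orbit the Sun?": "8",
        "What year did the Berlin Wall fall?": "1989",
    }
    return canned[question]

correct = sum(
    ask_model(q).strip().lower() == expected for q, expected in benchmark
)
measured_accuracy = correct / len(benchmark)

# The claim is falsified if measured accuracy falls below the claimed rate.
print(f"accuracy={measured_accuracy:.2f}, "
      f"claim holds: {measured_accuracy >= CLAIMED_ACCURACY}")
```

A real evaluation would of course need a much larger question set and careful answer matching, but the shape of the test (claim, measurement, possible refutation) is the same.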


Closing Comments

I used ChatGPT for the above, with some editing, and could happily have continued to dig deeper. It's like starting a web search for one thing and then spotting something else of interest along the way (OK, algorithms are probably at play!).


For the image I used above, I tried Midjourney, Adobe Firefly, and DALL·E 2, using a prompt along the lines of "create a positive image of technology". None were interesting to me. Midjourney was dark, and the others just looked like odd modifications of photos I could easily source from stock images - which is where I ended up, but...


I chose the image above because, during my ChatGPT conversation, it identified the light bulb as an example of technology. The surprise was that Adobe flagged the image as generated by AI - clearly, my prompt techniques need work!
