Discussion about AI and Wikis

Assignment 1: When you search for information, do you prefer to use AI or Wikipedia? Why?

In my opinion, the choice between AI and Wikipedia depends on the nature of the task.
If I am starting a formal research project, I definitely prefer Wikipedia. Its biggest advantage is the "References" section at the bottom. It provides a map of credible sources that I can actually cite in my papers. It feels like a stable "anchor" in a sea of information.
However, if I am trying to understand a very complex or abstract concept for the first time, I prefer AI. Tools like ChatGPT or Gemini act like a personal tutor. They can simplify difficult jargon into "plain English." While Wikipedia gives me the facts, AI gives me the intuition. Ultimately, I use AI to learn and Wikipedia to verify.

Assignment 2: Should AI be used to create Wikipedia articles? Read https://slate.com/technology/2023/01/chatgpt-wikipedia-articles.html and https://www.vice.com/en/article/v7bdba/ai-is-tearing-wikipedia-apart and let me know your thoughts

After reading the articles from Slate and Vice, I have a very cautious view on this. I believe AI should not be used to independently create Wikipedia entries.
The Slate article points out a major flaw: "hallucinations." AI is designed to be a "prediction machine," not a "truth machine." If it confidently invents fake citations, it destroys the very foundation of an encyclopedia. Furthermore, as the Vice article suggests, the Wikipedia community relies on human trust and consensus. If we flood the site with AI-generated content, we might burn out the human volunteers who have to check every single line.
In my view, AI should only be a "digital assistant"—helping with translation or fixing grammar—but the final "publish" button must always be pressed by a human who has verified the facts.

Extra Credit: A Quick Comparison
To test the difference, I "asked" both Gemini and Wikipedia about "The Business Model of TikTok." 
Wikipedia's Response: It provided a very detailed, chronological history of the company, with specific dates, legal cases, and financial figures. Every claim had a little blue number linking to a news article or a government report. It felt official and objective, but it took me a long time to find the "summary" I needed.
AI's Response: In contrast, the AI gave me a bulleted list of five key points (like algorithm, advertising, and e-commerce). It was much faster to read and understood my request for a "summary." However, it didn't show me exactly where it got those specific numbers.
In Conclusion: Wikipedia is a reliable library, while AI is a smart assistant. I need the library to be sure of the truth, but I need the assistant to save my time.

Assignment 3: What is the Wikipedia community doing about AI? Read Artificial intelligence in Wikimedia projects, Wikipedia:Artificial intelligence, and Wikipedia:WikiProject AI Cleanup, and let me know what is interesting/surprising

After researching the Wikimedia projects and the "AI Cleanup" group, I found their approach both organized and quite surprising. Here are the points I found most interesting:
1. The "Human-in-the-Loop" Principle. What surprised me most is that Wikipedia isn't "anti-AI." Instead, they have a strict rule called "human-in-the-loop." This means that while AI can help suggest edits or find typos, a real human must always review and take responsibility for the final change. It shows they value accountability over speed.
2. The "WikiProject AI Cleanup". I found it fascinating that there is a specific group of volunteers dedicated to "cleaning up" AI-generated content. They look for "telltale signs" of AI, such as overly polite language or repetitive sentence structures. It's like a digital cat-and-mouse game where humans are trying to protect the "human touch" of the encyclopedia.
3. Using AI to Fight Vandalism. Interestingly, Wikipedia has been using its own AI (like ORES) for years to detect vandalism. It's a bit ironic: they use "good AI" to protect the site from "bad AI" or low-quality automated edits. This shows that they see AI as a tool for maintenance, not a tool for creation.

Assignment 4:
For this task, I asked an AI to generate an article about "Short-form Video Monetization Models" and compared it to a standard Wikipedia entry.
To illustrate the difference, let's look at how both platforms handle a specific topic like TikTok's Creator Fund.
The AI: When I asked the AI about this, it quickly gave me a very clear, organized summary. It explained that TikTok pays creators based on views and engagement to encourage high-quality content. It even gave me a "pros and cons" list in seconds. However, the information was static and general. It didn't tell me exactly how much money is currently in the fund or whether the policy had changed recently. It felt like a textbook definition: easy to understand but lacking "real-time" evidence.

Wikipedia: In contrast, the Wikipedia page was much more "dense" and harder to skim, but it provided specific, verifiable data. For instance, it cited a news article from TechCrunch or The Verge stating that TikTok pledged $1 billion over three years. It also included a "Controversy" section with citations from famous creators who complained about earning only a few cents for millions of views. Each of these claims had a tiny blue number (a citation) that led me to the original source.

The difference is clear: the AI is like a smart classmate who summarizes the main idea for you during a break, while Wikipedia is like a library archive where every sentence is backed up by a physical document. For my blog or a university presentation, I would use the AI's structure to build my outline, but I would only trust the numbers and criticisms I found on Wikipedia, because I can actually prove where they came from.
