AI and Wikis - NGUYEN KIM CHI (응웬김찌)
Assignment 1:
When I search for information, I usually prefer using AI over Wikipedia, but it really depends on what I need. AI is fast, convenient, and can explain things in a simple, personalized way. For example, when I was studying sociology and trying to understand Goffman's theory, I used AI to break down concepts like "front stage" and "back stage" into easy explanations with examples from movies like The Truman Show. It felt like having a tutor guiding me step by step.
However, I still rely on Wikipedia when I need more detailed and reliable information. For instance, when I was working on a project about influencer marketing, I used Wikipedia to read about the history and definitions of influencers, because it provides structured content and references that I can trust and even cite in my assignment. I think the best way is to combine both. I often start with AI to quickly understand a topic, then use Wikipedia to check facts and explore deeper. This combination helps me learn faster while still staying accurate.
Assignment 2:
After reading the articles about whether AI like ChatGPT should be used to write Wikipedia articles, I think AI should be used, but only as a tool, not as a replacement for humans. If used carefully and responsibly, it can become a useful assistant in the process of creating knowledge.
As mentioned in the articles, AI can be very helpful. Tools like ChatGPT can help editors overcome writer’s block and create a basic draft quickly. For example, a Wikipedian used ChatGPT to generate an initial version of an article, then improved and verified it. This shows that AI can save time and make the writing process easier, especially for volunteers who may not have enough time.
However, I strongly agree with the concerns raised in the readings. AI often generates false or fabricated information, especially fake sources that look real but do not actually exist. This is very dangerous for Wikipedia, because Wikipedia's core value is verifiability. If editors rely too much on AI without checking, the quality of information could decrease and misinformation could spread easily.
As a bonus, I also asked AI this question. Interestingly, AI gave a similar answer: it suggested that AI should be used as a supporting tool, not as the main writer, and emphasized the importance of human verification. However, I think this also shows a limitation: AI tends to give balanced and safe answers, but it does not fully experience the real risks and responsibilities that human editors face.
Assignment 3:
After reading about how Wikipedia is dealing with AI, I found it really interesting that the community is not completely rejecting AI but carefully controlling it.
First, what surprised me the most is that Wikipedia has actually banned using AI to write or rewrite articles. Recently, the community decided that AI-generated content often violates core principles like accuracy, neutrality, and verifiability. However, AI is still allowed in limited ways, such as translation or small copy edits, as long as humans review everything.
Another interesting point is that Wikipedia is not new to AI at all. The community has been using bots and machine learning since the early 2000s, for example to detect vandalism or evaluate article quality.
So the issue is not “AI vs no AI,” but rather what kind of AI use is acceptable.
The most surprising part for me is the creation of WikiProject AI Cleanup. This is a group of volunteer editors whose job is to detect and fix AI-generated content.
They even have guides to identify AI writing, such as:
- fake or unrelated citations
- overly generic or confident tone
- strange formatting or repeated phrases
If an article looks like it was written by AI without proper sources, it can even be deleted quickly (“speedy deletion”).
What I found most interesting is how serious Wikipedia is about human responsibility. Even when AI is used, every piece of information must be verified by humans with real sources. Some editors even say that AI content is dangerous because it can sound correct but be completely fake.
Overall, Wikipedia’s response is defined by one key idea: AI can assist, but it cannot be trusted independently. Human verification remains the foundation of the platform.
Assignment 4:
I asked ChatGPT to create a Wikipedia-style article on Mesopotamian Art, and it produced a clear and well-structured result. At first glance it looked impressive: organized, readable, and easy to understand. However, when I compared it with the original Wikipedia article I had worked on before, several differences became clear.
The AI version is strong in structure and readability. It presents the topic in a simple and logical way, making it accessible for beginners. However, the content is mostly surface-level. While it correctly mentions key civilizations and examples, it lacks deeper details found in the original article, such as cylinder seals, symbolic systems, and the central role of sculpture.
Another limitation is academic depth. The Wikipedia version provides more detailed analysis and historical context, while the AI version reads more like a general summary.
The biggest issue is the lack of citations. Unlike Wikipedia, which requires verifiable sources, the AI article does not include proper references, making it unsuitable for actual publication.
Overall, ChatGPT created a good overview (about 7/10): clear and informative, but not yet at Wikipedia standard. It is useful for drafting ideas, but still needs human editing for depth, accuracy, and reliability.