People lie, people are lazy, people are stupid
For example, Google’s Peter Norvig argued: “… I am not against the Semantic Web. But from Google’s point of view, there are a few things you need to overcome, incompetence [of the user] being the first”; “second problem is competition. I’m the leader, why should I standardize?”; and “third problem is one of deception. We deal every day with people who try to rank higher in the results.”
Or as Doctorow puts it succinctly in a few of the points from his “Metacrap” essay: People Lie, People are Lazy, People are Stupid, and Schemas aren’t neutral.
Classification systems are always shaped by a particular world-view and are always political in some sense, as discussed in depth in Sorting Things Out: Classification and Its Consequences by Geoffrey C. Bowker and Susan Leigh Star.
The worry that “with less human oversight with the Semantic Web, we are worried about it being easier to be deceptive” is actually reminiscent of Doctorow’s novella Human Readable.
Berners-Lee agreed that deception is a problem and stated “part of the Semantic Web is about identifying the originator of information, and identifying why the information can be trusted, not just the content of the information itself.”
This latter has information literacy written all over it. But how do you artificially ascribe legitimacy? Social trust networks – like Slashdot or Digg ratings? If more people say it’s “the truth,” then it is? E.g. how do I know this site about fluoride is legitimate?
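The worry that legitimacy collapses into vote counting can be made concrete with a toy sketch. This is purely illustrative – the function, names, and reputation weights are all invented, not any real rating system’s API:

```python
# Toy model of a "social trust" score: each endorsement of a claim is
# weighted by the endorser's reputation. All weights here are made up.

def trust_score(endorsements):
    """Sum the reputation weights of everyone who vouched for a claim."""
    return sum(weight for _, weight in endorsements)

# A fringe claim endorsed by many low-reputation accounts...
fringe = [(f"anon{i}", 0.1) for i in range(50)]

# ...versus a claim endorsed by a few high-reputation sources.
careful = [("peer_reviewed_journal", 2.0), ("public_health_agency", 1.5)]

# The crowd of low-reputation endorsers outweighs the credible sources.
print(trust_score(fringe) > trust_score(careful))  # True
```

The sketch shows the failure mode in miniature: unless reputation weights are themselves trustworthy (which just pushes the problem back a level), sheer numbers can manufacture an “alternate truth.”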
And are these all just “alternate truths”? An artificial/social networked trust system is a plot device for many authors – off the top of my head, Doctorow’s Down and Out in the Magic Kingdom, and, it seems, Stross’s Accelerando as well?