Freedom, Knowledge, Truth, and Responsibility

#1

One must be free to learn the truth about the world, and one is responsible for ensuring that what one has learned is indeed the truth.

However, to what degree should one be responsible for learning the truth? Over the last few years, with the increasing ease of sharing information with the masses, we have too often seen the spread of “fake news” result in real harm. This is where the debate between censorship and freedom of speech begins. I think it is undeniable that I should be free to speak my mind, but should there be a mechanism to determine whether what I am saying is within the realm of truth, and if so, who should be responsible for that? Some regimes around the world have opted for complete and utter censorship, resulting in very limited freedom of speech - an obvious extreme, but one that in itself created an interesting use case for blockchains (https://etherscan.io/tx/0x2d6a7b0f6adeff38423d4c62cd8b6ccb708ddad85da5d3d06756ad4d8a04a6a2). At the other end, there have been regimes that exert little control over freedom of speech, resulting in the mass spread of fake news.

So I ask you the following -

Should we (the collective) be responsible for establishing the accuracy of content shared on the web?

If so, who should judge the accuracy of that content and how?

What should happen to content that has been flagged as inaccurate?

#2

Great topic, and one I’d love to learn more about. On top of my day-to-day job, I manage information and documentation for my team at a company in an incredibly fast-paced industry, and I can see this being highly relevant, now and in the years to come, for just about anyone.

Given the millions of platforms on the web and the variety of content formats shared on them today, it is only right that accurate information be presented to the masses and that anything inaccurate, at the very least, be flagged to its users, to combat the dual problems of information overload and fake news.

Content accuracy remains subjective due to the proliferation of user-generated content and of the platforms hosting it. While any news site can be flagged and come under fire for fake news, it is difficult to do the same to individuals for user-generated content, for two reasons: 1) such content usually can’t be parked in a predefined content category (with a single authority monitoring it), and 2) it is usually interspersed with opinions, which may be the most difficult type of content to verify.

One solution I can think of is having that responsibility sit, in part, with anyone accessing the content, though I can see how that would open up a whole new can of worms. Even so, content should be flagged for accuracy, currency, and objectivity, among other criteria.

Happy to hear other opinions on this!

#3

Even with individual content we see heavy skewing when it comes to the propagation of that information. Take Reddit as an example.

Also, most people are not of a mind to be personally responsible for their own content consumption. We are in the soundbite era: a headline can be completely misleading, but most will not look beyond it.

Interested to see whether there could be Sybil resistance for the propagation of information, and potential weighting. But then… you could buy weighting, haha. We live in a tricky world where misinformation is easily propagated.
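To make that weighting concern concrete, here is a minimal Python sketch (all names and numbers are invented, not any real protocol) of stake-weighted flagging, showing how purchased weight can outvote a larger number of honest readers:

```python
# Minimal sketch (hypothetical names) of stake-weighted flagging of a story.
# It illustrates the concern above: if weight can be bought, one large
# stakeholder can flip the aggregate verdict regardless of headcount.

from dataclasses import dataclass

@dataclass
class Flag:
    voter: str
    weight: float        # e.g. stake, reputation, or purchased influence
    says_accurate: bool

def weighted_verdict(flags: list[Flag]) -> bool:
    """Return True if the weighted majority says the story is accurate."""
    accurate = sum(f.weight for f in flags if f.says_accurate)
    inaccurate = sum(f.weight for f in flags if not f.says_accurate)
    return accurate >= inaccurate

# Ten honest readers with weight 1 each flag the story as inaccurate...
flags = [Flag(f"reader{i}", 1.0, False) for i in range(10)]
# ...but one actor who bought 11 units of weight insists it is accurate.
flags.append(Flag("whale", 11.0, True))

print(weighted_verdict(flags))  # True: the purchased weight wins
```

Sybil resistance helps against many cheap identities, but as long as weight is purchasable, one well-funded identity is enough.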

#4

I like how you brought up that most are not of a mind to be responsible for their content consumption. It raises the question - should this be an inherent responsibility of a new-age internet user, or does that responsibility sit externally? I find that the furore surrounding fake news has prompted a small percentage of the masses to consider the validity/accuracy/objectivity of the content they consume.

The overall narrative is changing, albeit slowly. Educational institutions are starting to include modules on tech, coding, data and the like from as early as primary level. Children in schools are being taught to identify biases in information, conveyed through tone, sources, data and so on, and though this may be no different from what I was taught in school (all those years ago, haha!), it has a much stronger and more relevant application in today’s world, given @multichain’s scenario. What better teacher than experience!

We find ourselves back at the original question -

> should there be a mechanism to determine if what I am saying is within the realm of truth or not, and if so, who should be responsible for that?

I’d say yes, there should be one. But putting that responsibility on the collective (which, to your point @Exposed2u, opens the door to potential weighting/Sybil attacks, etc.) might not be the right answer. At the end of the day, this “judge of the accuracy of content”, be it an individual or a body, should always have credibility or authority, based on their knowledge or experience.

How do you see the third question answered in an ideal world? @Exposed2u

#5

I think that many people have lost their ability to think critically about what they see and read. It’s similar to the apathy involved in trusting validators, both inside and outside of crypto. So, how do you remove validators from the equation in news articles and broadcasts without relying on a centralized source to judge the quality of the content you consume? In the publishing world, the self-publishing craze has enabled a few talented writers to publish amazing content, but their stories can easily get lost among the thousands of books that lack any form of quality control. On the web, it would be difficult to impose blanket quality control, unless, say, there were a mechanism inside an individual’s web browser that allowed them to input their own quality settings.

Taking the responsibility to the commercial sector, suppose an online news organization left all control of its content to a rotating group of journalists from around the country. Is there a decentralized way to verify the accuracy of the facts presented by each journalist without the use of validators? What would a validator even be in this context? Do journalists whose content contains fewer errors get more articles published? This still puts the responsibility on the individual to choose a news source that is reliable rather than entertainment-news. Interesting topic, and one that will hopefully see many attempts at a solution in the near future.
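The browser-level quality settings idea could be as simple as a per-user filter over a feed. A rough Python sketch, with invented fields and thresholds (this is not a real browser API):

```python
# Rough sketch of per-user quality settings applied to a feed of articles.
# Field names and thresholds are invented for illustration only.

from dataclasses import dataclass

@dataclass
class Article:
    title: str
    source_reputation: float    # 0..1, scored by whichever raters the user trusts
    cited_primary_sources: int
    flagged_inaccurate: bool

@dataclass
class QualitySettings:
    min_source_reputation: float = 0.6
    min_primary_sources: int = 1
    hide_flagged: bool = True

def filter_feed(feed: list[Article], prefs: QualitySettings) -> list[Article]:
    """Keep only the articles that meet the reader's own quality bar."""
    return [
        a for a in feed
        if a.source_reputation >= prefs.min_source_reputation
        and a.cited_primary_sources >= prefs.min_primary_sources
        and not (prefs.hide_flagged and a.flagged_inaccurate)
    ]
```

The point is that the thresholds belong to the reader, not to a central editor, so the bias sits with the person doing the reading.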

#6

Totally, but I think what we are seeing today runs deeper than being critical of the source of information. “Quality” is subjective; measuring it seems like it will introduce the bias of the measurer, no? Obviously this makes it sound like a great place for blockchains to step in and provide balanced information, but the chain would need that judge-bias removed during its execution.
You mention browser-level quality control - what about moving that to a regional ranking system? If an article trends in the block in which it was written, it becomes available to the city it is from. City --> state --> country --> continent… and so on. Each access level would bring with it all the comments from all the readers, showing the earliest comments first rather than the latest.
That way, if an article comes from the other side of your country, you know it is relevant to enough people that it has become important for citizens of your country to know about it. And it expands from there.
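To make that escalation path concrete, here is a minimal Python sketch (levels, thresholds, and field names are invented placeholders, not a proposed design):

```python
# Minimal sketch of the regional ranking idea: an article starts at the block
# level and, each time it "trends" there, becomes visible one level wider.
# Thresholds are invented placeholders.

LEVELS = ["block", "city", "state", "country", "continent"]
TREND_THRESHOLDS = {"block": 50, "city": 500, "state": 5_000, "country": 50_000}

class RegionalArticle:
    def __init__(self, title: str):
        self.title = title
        self.level = 0                    # index into LEVELS; starts at "block"
        self.reads_at_level = 0
        self.comments: list[str] = []     # kept in posting order (oldest first)

    def record_read(self) -> None:
        self.reads_at_level += 1
        threshold = TREND_THRESHOLDS.get(LEVELS[self.level])
        # Promote to the next, wider region once the article trends here.
        if threshold is not None and self.reads_at_level >= threshold:
            self.level += 1
            self.reads_at_level = 0

    def visible_in(self) -> str:
        return LEVELS[self.level]

    def add_comment(self, text: str) -> None:
        self.comments.append(text)        # earliest comments stay first
```

Whether the read counts live on a chain or elsewhere, the escalation rule itself is that simple.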

Effectively, the responsibility then lies in reading the news at all - the more the better. As you read more you will help your local journalists, but also absorb information that has been proven to be relevant (or at least entertaining) by the world at large. Just don’t forget to check the comments in case someone disputes it.

#7

Exactly. The news-chain would need to publish content like a regular newspaper, but without the ability for a Managing Editor to censor content. However, there should be a system in place that allows authors the choice of matching their article with an editor/proofreader/fact-checker, who could receive a percentage of the article’s royalties. I love your idea of a regional ranking system. The front page could include a list of all the latest regional articles. The second page could be a place where you begin to select your favourite journalists and create your own viewer preferences, which could include top stories from your region. It’s a brilliant idea to have city, state, country, and continent sections with trending articles. This would allow important stories to spread easily.
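The royalty-sharing part of that matching could be as simple as the sketch below (the 15% share is an invented example, not a proposed rate):

```python
# Tiny sketch of splitting an article's royalties between the author and an
# optional editor/proofreader/fact-checker the author chose to work with.
# The 15% default share is purely illustrative.

def split_royalties(amount: float, has_fact_checker: bool,
                    checker_share: float = 0.15) -> dict[str, float]:
    if not has_fact_checker:
        return {"author": amount}
    checker_cut = round(amount * checker_share, 2)
    return {"author": round(amount - checker_cut, 2), "fact_checker": checker_cut}

print(split_royalties(100.0, has_fact_checker=True))
# {'author': 85.0, 'fact_checker': 15.0}
```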

But we still have @multichain’s fake-news conundrum. A celebrity could post a controversial article on their stance against vaccines, which could easily trend country-wide or globally. This type of viral story can easily convince people that the content is correct, and that can prove fatal. Could there be a mechanism like a truth-o-meter applied to an article, where readers can disprove the claims in the article with facts? Could the blockchain reduce an article’s ability to go viral if the facts stated in it are proven wrong? Comment sections are widely used today, but they haven’t effectively stopped the spread of misinformation. To use the example of an anti-vaccine article, maybe a disclaimer could appear right at the top of the article stating cold hard facts, such as:

  1. The percentage of medical professionals who believe that vaccines are safe.
  2. Risk of death due to measles = 1 in 1,000
  3. Measles infected 41,000 children and killed 37 in Europe in 2018.

The list of facts could rely on primary sources, such as medical institutions, to verify accuracy. They could also be ranked by reader polls, so that only the top three corrected facts show at the top of an article unless you click a “read more” button.
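As a sketch of that disclaimer mechanism - reader-submitted corrections backed by primary sources, ranked by polls, with only the top three pinned above the article - it might look like the following in Python (all vote counts and source labels are illustrative placeholders):

```python
# Sketch of the "truth-o-meter" disclaimer: readers submit corrections backed
# by primary sources, vote on them, and the top three are pinned above the
# article. Vote counts and source labels are placeholders.

from dataclasses import dataclass

@dataclass
class Correction:
    claim: str               # the corrected fact shown to readers
    primary_source: str      # e.g. a medical institution or official dataset
    poll_votes: int = 0

def disclaimer(corrections: list[Correction], top_n: int = 3) -> list[str]:
    """Return the top-N reader-ranked corrections to pin above the article."""
    ranked = sorted(corrections, key=lambda c: c.poll_votes, reverse=True)
    return [f"{c.claim} (source: {c.primary_source})" for c in ranked[:top_n]]

corrections = [
    Correction("Risk of death due to measles is about 1 in 1,000",
               "public health agency", 120),
    Correction("Measles infected 41,000 children and killed 37 in Europe in 2018",
               "public health agency", 95),
    Correction("The vast majority of medical professionals consider vaccines safe",
               "national medical associations", 150),
    Correction("Vaccine side effects are overwhelmingly mild",
               "public health agency", 40),
]

print(disclaimer(corrections))  # only the three most up-voted corrections show
```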

There are a lot of interesting ways to tackle this subject, and it’s a great topic to chat about!