Matthew Kovacev

Summary

Overview: Hello everyone! My name is Matthew Kovacev, and I am currently researching AGI policy and governance, covering both the technology behind it and its policy ramifications. I am also skilled in data analysis, research, and machine learning.

AGI Policy Research at The Millennium Project

AGI Policy onAir Hub Administrator

Contact me at matthew.kovacev@onair.cc

OnAir Post: Matthew Kovacev

About

I have over three years of experience across a range of relevant skill sets, including fundraising, outreach, quantitative research, statistical analysis, and budgeting. This experience comes from various roles, including a local campaign and an internship with an international security magazine.

I am highly skilled in writing, research, mathematics, and statistical methods. In addition to my skills in written and oral communication and networking, I have extensive experience with software such as Microsoft Office (Word and Excel), Stata, Python, and SPSS. With this robust skill set, I am ready to work and thrive at any government agency, nonprofit organization, interest group, or other organization that serves the communities that need it most or helps governments operate smoothly!

Web Links

The Dangers of Weaponizing Artificial Intelligence

AI technology has brought many benefits to the world, but could weaponizing AI be a mistake?

Many advancements in AI technology have been made in recent years, especially in the military. While most of these advancements have come in the form of reconnaissance and defense, AI is also being used on the attacking end. This raises significant ethical concerns because of the limitations of AI technology, specifically its trouble understanding context and distinguishing correlation from causation. The same limitations also call the efficacy of military AI into question, and these dilemmas have led to much skepticism and hesitancy. The concerns are well founded: the limitations of AI and its propensity for error can carry a massive financial and human cost.

As it exists today, AI is by no means perfect. Current AI systems have many limitations, including narrow focus and trouble detecting context. This means that an AI-controlled missile defense system today would have trouble differentiating between a conventional missile and one carrying a nuclear warhead. In battle, an AI-controlled robot dog would likewise be unable to distinguish a soldier from a civilian. An error by such a system can have devastating consequences.

AI has no sense of emotion or morality outside of its programming. It has no knowledge of international laws or customs and acts only on what it was programmed to do. A human knows that killing civilians is a war crime and thus morally wrong; an AI-controlled drone cannot make that judgment. That is why drone strikes often require human intervention to verify targets and reduce the risk of collateral damage. The risk of unnecessary death and destruction is already significant with humans in control; adding AI would only exacerbate it.

It is also extremely easy to fool today's AI. Image classifiers recognize basic objects, such as a stop sign, by patterns; a sticker printed with the right adversarial pattern can trick the system into seeing something entirely different. This exploit could be taken advantage of on the battlefield: an adversary could place such stickers on military vehicles so that the AI detects ordinary cars or trucks. Because AI is so easy to fool, the technology is both ineffective and easy to abuse, which is another reason it may not belong on the battlefield.
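To make this concrete, here is a minimal sketch of one well-known attack of this kind, the fast gradient sign method (FGSM), written in Python with PyTorch. The pretrained model, the epsilon value, and the random stand-in image are illustrative assumptions for the sketch, not a description of any fielded system.

```python
# Minimal FGSM sketch: nudge an image's pixels so a classifier mislabels it.
# Assumes PyTorch and torchvision; the model and epsilon are illustrative.
import torch
import torchvision.models as models

model = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)
model.eval()

def fgsm_attack(image, label, epsilon=0.03):
    """Return an adversarially perturbed copy of `image` (shape [1, 3, H, W])."""
    image = image.clone().detach().requires_grad_(True)
    loss = torch.nn.functional.cross_entropy(model(image), label)
    loss.backward()
    # Step each pixel in the direction that increases the loss, then clamp
    # back to the valid pixel range so the change stays visually subtle.
    perturbed = image + epsilon * image.grad.sign()
    return perturbed.clamp(0, 1).detach()

# Usage: a random tensor stands in for a real photo of, say, a stop sign.
x = torch.rand(1, 3, 224, 224)
y = torch.tensor([919])  # ImageNet class 919 is "street sign"
x_adv = fgsm_attack(x, y)
print(model(x).argmax(1).item(), "->", model(x_adv).argmax(1).item())
```

The perturbation is tiny, often invisible to a person, which is exactly what makes physical variants of the attack, like printed stickers, so hard to guard against.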

Because of these limitations, current military applications of AI mostly lie outside the battlefield, and a soldier fighting today is unlikely to face AI-controlled robots or drones. However, since the U.S., China, and Russia are all developing and investing in AI technology, this may soon change. Unfortunately, Russia has proven rather unpredictable in recent months, raising the risk that Moscow could abuse this technology in horrific ways.

AI is a fantastic technology with a bright outlook. It is versatile and has potential applications in various fields, including the military. However, given the many issues and limitations of AI as it currently exists, its use in military combat would be catastrophic. Even so, AI could yield excellent results in defense and reconnaissance, so long as it is aided by human intervention. AI is a relatively young technology that still requires humans to function properly, and as long as humans retain a say, there is less room for error and more room for improvement. Hopefully, once these limitations are addressed, we will see more effective and more ethical AI deployed both on and off the battlefield.

How The Internet Facilitates the Spread of Misinformation

Source: Modern Diplomacy

The internet has made credibility harder to judge, opening the door for misinformation and conspiracy theories.

The internet has led to great advances, namely the dissemination of massive amounts of information to billions of people around the world. However, this glut of information has made it easier for malicious actors to publish misinformation and conspiracy theories.

This was seen during and after the 2016 and 2020 US presidential elections, when various actors on both sides engaged in online propaganda and misinformation campaigns. After Donald Trump was elected in 2016, Democrats claimed that Russian actors had intervened in the election illegally by posting misinformation online. Robert Mueller's lengthy investigation did not establish that the Trump campaign conspired in that effort, but it did document Russian online interference, and there is no doubt that quite a bit of misinformation was spread in favor of Donald Trump.

In 2020, Joe Biden defeated Trump in that year's presidential election. Misinformation, including Trump's accusations that the Democratic Party had permitted widespread voter fraud, was partially responsible for the storming of the Capitol on January 6th, 2021. The destruction inside the most important federal building in the country showed how dangerous misinformation can be to the liberal democratic system.

The most prominent of these misinformation campaigns and conspiracy theories is QAnon, a far-right conspiracy theory started on 4chan by an anonymous user who claimed that Democrats are Satan-worshipping pedophiles. Machine learning has since been used to identify the man most likely responsible for the posts. While the following this anonymous user garnered on an obscure imageboard was relatively small, it left a considerable mark on modern political discourse in the United States.
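Reporting on that attribution effort described a stylometric approach: comparing the writing style of the Q posts against known writing samples from candidate authors. As a rough illustration of the general technique only, and not of the investigators' actual (unpublished) pipeline, a character n-gram classifier in Python might look like the following; the sample texts and author labels are placeholders.

```python
# Toy stylometric attribution sketch: character n-grams + logistic regression.
# Illustrates the general technique only; all texts and labels are placeholders.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Known writing samples from two hypothetical candidate authors.
texts = [
    "The storm is coming. Trust the plan and watch the news closely.",
    "Disinformation is necessary. Future proves past. Expect fireworks.",
    "Please review the attached quarterly report before Friday's meeting.",
    "Let me know if the budget figures need another pass before we file.",
]
labels = ["author_a", "author_a", "author_b", "author_b"]

# Character 3- to 5-grams capture punctuation and spelling habits, which
# tend to survive changes of topic better than word choice does.
clf = make_pipeline(
    TfidfVectorizer(analyzer="char", ngram_range=(3, 5)),
    LogisticRegression(max_iter=1000),
)
clf.fit(texts, labels)

# Attribute an unseen post to whichever known author it most resembles.
print(clf.predict(["Watch the water. The plan is in motion, trust it."])[0])
```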

Misinformation is not a new phenomenon; it fueled the satanic panic of the 1980s, among other conspiracy theories. One might expect that the internet, by making fact-checking easier, would put an end to such conspiracy theories and misinformation campaigns. However, the rise of QAnon and the claims of voter fraud in the 2020 election suggest otherwise.

The internet, by allowing just about anyone to publish their opinions on the world stage, has opened the door to a substantial increase in the spread of misinformation. It used to be very difficult for an ordinary person to reach a mass audience; in the information age, it is relatively easy to go viral and broadcast your opinions to the world. While this has opened discourse to new voices, many of these opinions are rooted in falsehood.

Some may think that misinformation and conspiracy theories are confined to the far right. As the claims of Russian collusion that followed Trump's 2016 election show, this is not the case: some on the left are also guilty of making outlandish claims about their opposition. Not even centrists are immune, as much of the information about the ongoing war in Ukraine, notably the stories of Snake Island and the Ghost of Kyiv, has been contested or even proven false. These stories were perpetuated on social media sites such as Twitter and Reddit, and even people in the ideological center spread them, unaware of the dangerous consequences of repeating unfounded claims.

The internet is truly a blessing. It has allowed us to see the world through many different perspectives. It is also a curse, as many of these perspectives are at best false and at worst dangerous. That is why fact checking by independent and unbiased sources is important in today’s society.

Because of deeply rooted biases in virtually every news source, even prominent outlets like the New York Times and Fox News are not immune to spreading misinformation. Fact-checking claims before making them online is vital, because the effects of spreading misinformation can cost lives, as was seen on January 6th. If we make a conscious effort to verify claims before sharing them, we will all be better informed and safer. In short, be careful what you read online; you never know when you will be exposed to misinformation.

How 4chan Radicalizes Youth and Grooms Them Towards Terrorism

Source: Modern Diplomacy

The imageboard was started in 2003 to discuss anime and various other topics, but it soon festered into a safe space for hateful rhetoric. In the aftermath of yet another racially motivated mass shooting by a frequent user, its dangers have finally reached the mainstream.

4chan is an extremely unusual website. It has been running since 2003, and over the course of almost 20 years it has influenced many internet memes and phenomena. However, in the wake of the European migrant crisis in 2015 and the 2016 presidential election, it became associated with white supremacy, especially on its /pol/ board. This hateful rhetoric festered and worsened in 2020 during the COVID pandemic and the George Floyd protests. 4chan was thrust into the spotlight once again on May 14th, 2022, when a white supremacist livestreamed his massacre at a Buffalo supermarket.

This attack, fresh in Americans' minds, led many to question why 4chan is still allowed to exist. It comes after 4chan's rhetoric inspired a 2015 mass shooting in Oregon and its users aided in organizing the Unite the Right rally and the January 6th riot. Clearly, 4chan is a hotbed for far-right terrorism. But why is this imageboard the way it is? The answer lies in its lax moderation of content.

At first glance, 4chan appears to consist mostly of pornography. However, on the site's /pol/ board, it does not take long to find the kind of rhetoric that radicalized the Buffalo shooter. One post I found featured a racist joke at the expense of Black people. Another praised fighters in Ukraine's Azov battalion while joking about killing trans people. Yet another complained about an "influx of tourists" caused by the Buffalo shooter, whom the poster insulted with an anti-gay slur. These memes and jokes seem designed to appeal to a younger, perhaps teenage audience. It is clear that these communities are still trying to recruit youth into their ranks, even after the tragedy in Buffalo.

The content is, to say the least, vile. The fact that it is permitted and encouraged not just by the userbase (which numbers in the millions) but also by many moderators tells us that something is fundamentally wrong with 4chan. In fact, copies of the livestreamed Buffalo massacre were spread widely on 4chan, to the amusement of its userbase.

Many of 4chan's users are social rejects who feel as if they have nothing to lose. Feeling unaccepted and alienated from society, they turn to 4chan, where harmful ideologies such as white supremacy and incel culture can seem extremely validating. Young, socially alienated men, who make up the majority of 4chan's userbase, are also among the demographics most vulnerable to radicalization.

What can we do to prevent the further radicalization of youth and deradicalize those already affected by this rhetoric? First, we need to either heavily regulate 4chan or shut it down; there is no place on the internet for this kind of hatred or for incitement to commit horrific acts like the one in Buffalo. For those already radicalized, we need to mount a deradicalization campaign. But how can this be done?

4chan prides itself on anonymity, so it is difficult to know who uses it. Education on radicalization and on identifying propaganda is therefore vital. This education should focus mostly on adolescents, given their susceptibility to radicalization when exposed to hateful rhetoric. While white supremacy must be emphasized, other forms of radicalization, such as jihadism and other ethnic supremacist movements, should be covered as well. Finally, tolerance must be fostered among all people, not just those at risk of being groomed into terrorism.

Over its lifetime, 4chan has spawned many humorous memes, but it has since become a hotbed for hatred and terrorism. Because memes can carry dangerous ideas far beyond their origin, websites like Reddit and Facebook also need strong regulation to prevent the dissemination of dangerous misinformation. It is unlikely that 4chan will ever moderate itself, as its lack of strict moderation is its defining feature. Thus, it has overstayed its welcome and no longer has a place in today's information-driven society.

Discuss

OnAir membership is required. The lead Moderator for the discussions is Matthew Kovacev. We encourage civil, honest, and safe discourse. For more information on commenting and giving feedback, see our Comment Guidelines.

This is an open discussion on the contents of this post.
