Ransomware Attacks Cripple Businesses Around the U.S.

On October 19th, San Bernardino City Unified School District in California was the victim of a ransomware attack that locked access to some district files. An attack on a school district might seem like an odd choice of target, but there have been upwards of 62 ransomware attacks on school districts this year alone.

Ransomware is malicious software (malware) designed to block access to a computer system or its data until the victim pays a ransom. Cybercriminals often deliver it through phishing emails or plant it on a website so that visitors to the site become infected.

These attacks can cause data to be lost, and even if the ransom is paid, the victim may never get the locked files back. Although the ransomware attack at this California school district caused only a minor inconvenience, requiring attendance to be taken manually for the day, we see potentially more devastating effects of these attacks when places like 911 dispatch centers are targeted.

In March of this year, a 911 dispatch center in a small Georgia town was hit with a ransomware attack that shut down all of its computers. Around the same time, the county jail lost the ability to open cell doors electronically, and sheriff’s deputies couldn’t access their laptops.

Our society has become so dependent on technology that we have lost the ability to fall back on older methods. These jobs were once done without computers, but the tools for doing them that way, like ticket books for hand-written tickets, have gone unused for so long that many offices no longer keep them at all.

Other places affected by ransomware attacks include Lake City, FL; Riviera Beach, FL; Baltimore, MD; Cleveland Hopkins International Airport; Augusta, Maine; Tallahassee, FL; Albany, NY; Jackson County, GA; Atlanta, GA; and the Colorado Department of Transportation, shown in the map below.

Notice that for most of these attacks I named whole cities rather than specific offices. That’s because the attacks affected many important city systems at once. If that doesn’t scare you, I don’t know what will. The idea that a computer virus can cripple multiple city offices so effectively and so quickly is deeply concerning.

Because cybercriminals tend to look for the best opportunities to get the ransom they want when they attack, the next big concern for ransomware attacks could be Cyber Monday. If a retailer’s systems were down for any period of time that day, they would be losing a lot of money and may be more willing to pay the ransom and do so quickly to get their system back up and running.

As these cybercriminals continue to develop their craft, we as a nation must keep pushing for better data protection and information security, and educate ourselves as tech users so that we are as safe as possible online as individuals. It only takes one person to get caught in the sticky web of malware attacks for a whole city to be left in the dark.

Artificial Intelligence Can Amplify or Fight Fake News

Yesterday, Facebook CEO Mark Zuckerberg made the case for his stance on free speech at Georgetown University, where he tried to justify his decision not to regulate the content of political speech on Facebook.

Shockingly, Facebook says it will allow campaign ads that contain false information, though it will still remove ads that violate other parts of its ad policies, like the use of fake buttons. Zuckerberg claimed that regulating political speech is a slippery slope that could lead to reduced speech rights and that the public has a right to see these false ads and navigate this discourse uninhibited.

This only serves to feed the fake news epidemic. With the rise in AI technology, this can be especially detrimental to our society. Although we have some ability to hinder the spread of fake news, a lot of the control rests with our service providers, and now potentially AI.

If you’re new to this topic, let me get the most shocking news out there right away: some of the articles you’re reading online to get your news have been written by AI technology, NOT a journalist.

While one might think that letting a machine write these stories would make them more accurate, the way a calculator makes arithmetic more accurate, this is not always the case. AI is only as good as the information we give it, and that information is still chosen by humans, which leaves room for biases to make their way in once again.

On the other hand, AI can also be used to fact-check articles and identify machine-generated text, an approach examined in a recent study, since many fake news stories are generated by machines precisely so they can be spread easily on social media. With social media outlets refusing to do this for us where political speech is involved, this technology could be invaluable.
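
To give a sense of how simple such a detector can be at its core, here is a minimal sketch in Python. It is my own illustrative example, not the system from the study: a classic text classifier (TF-IDF features plus logistic regression) trained on a handful of invented, hand-labeled snippets.

```python
# A toy sketch of a machine-text detector: TF-IDF features plus logistic
# regression, trained on a handful of invented, hand-labeled snippets.
# Real detectors use far larger corpora and far bigger models.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

texts = [
    "Local council approves new budget after lengthy public debate.",
    "Scientists report early results from a small clinical trial.",
    "The the event occurred occurred in a location of great location.",
    "Breaking incredible news shocking everyone everywhere always now.",
]
labels = [0, 0, 1, 1]   # 0 = human-written, 1 = machine-generated (toy labels)

detector = make_pipeline(TfidfVectorizer(), LogisticRegression())
detector.fit(texts, labels)
print(detector.predict(["Officials confirm the budget passed after a lengthy debate."]))
```

Real detectors are trained on enormous amounts of text with far more sophisticated models, but the basic recipe of labeled examples plus a learned pattern-matcher is the same.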

How do you feel about AI in journalism? Do the pros outweigh the cons? Should our social media platforms be using it to fight fake news or continue to let it exist? If it isn’t their job to regulate this speech and stop the spread of false information, where does the responsibility lie?

What is AI? And Where is it Going?

Yesterday, tech company Nvidia’s stock jumped 5.3% due to a big opportunity for the company to benefit from a growing trend in AI: systems that can listen to, understand, speak, and give context to our speech.

With the rise in AI technology and improvements to it, it is a topic worth breaking down. So, what exactly is AI?

Artificial Intelligence (AI) is a branch of computer science focused on building systems that can perform tasks that would normally require human intelligence. Today this is mostly done through machine learning algorithms and large datasets.

These algorithms allow AI systems to take the large amounts of information they are given, sort it, identify patterns within it, and use those patterns to learn how to perform other tasks. Yes, you read that right. AI is actively LEARNING new things and applying them on its own.
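
To make “learning from data” a little more concrete, here is a minimal sketch using Python and scikit-learn (my own illustrative choice of library and dataset, not tied to any particular AI system). A small model is shown labeled examples, finds the patterns in them, and is then asked about examples it has never seen:

```python
# A small model is shown labeled examples, finds patterns in them, and is
# then scored on examples it never saw during training.
from sklearn.datasets import load_iris
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier

X, y = load_iris(return_X_y=True)          # flower measurements plus known species labels
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.3, random_state=0)

model = DecisionTreeClassifier(random_state=0)
model.fit(X_train, y_train)                # "learn": extract patterns from the examples
print("accuracy on unseen flowers:", model.score(X_test, y_test))
```

The “intelligence” here is nothing mystical: it is a statistical pattern pulled out of the data, which is exactly why the quality of that data matters so much.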

To better understand AI and how it is used, I’m going to go through a condensed timeline.

1956

The term “Artificial Intelligence” is coined at a conference at Dartmouth College.

1974-1980

The first “AI winter”: several reports are published criticizing AI, which leads to reduced government funding and interest.

1980s

AI experiences a revival of interest as the British government resumes funding research in the field to keep pace with Japanese AI efforts.

1987-1993

Interest in AI fades again and funding shrinks once more as the market for specialized AI hardware collapses.

1995

Mercedes-Benz demonstrates an AI-guided self-driving car, completing a long-distance trip with minimal human intervention.

1997

IBM’s Deep Blue system beats reigning world chess champion Garry Kasparov. Here’s an interesting flip-side to this AI milestone.

2011

IBM’s Watson system beats two of Jeopardy!’s greatest champions, and Apple launches Siri, a voice assistant that originated at SRI International.

2014

A chatbot named Eugene Goostman convinces a panel of judges that it is human during a Turing test, the machine-intelligence evaluation developed by Alan Turing in the 1950s.

2015

Deep learning systems surpass human accuracy on certain image-recognition benchmarks, enabling applications such as determining which crops are ready for harvest and which are not.

2016-2017

In 2016, Google’s AlphaGo software beats one of the world’s best players of the board game Go, which is far more complex than chess. The following year, a newer version of the software, AlphaGo Zero, is released; after three days of playing matches against only itself, it beats the earlier champion-defeating version 100 games to 0.

This is important because sometimes we can’t supply AI with the data needed to solve certain problems, but if it can learn from its own experiences and train itself, so to speak, that opens many doors for this technology.

This fueled more research into “dueling” neural networks, better known as generative adversarial networks (GANs). A GAN pits two systems against each other in a sort of game, each one improving until it can reliably beat the other at its task.

One way this is done is with image generation: one system keeps producing fake images and showing them, mixed with real ones, to the other system until that system can no longer tell the fakes from the real thing. Even if you’re only slightly geeky about tech, this probably excites you, even if just a little bit.
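
To show what that two-player game looks like in code, here is a toy sketch in Python using PyTorch (my own illustrative example, not DeepMind’s or anyone else’s actual research code). Instead of images, the generator just learns to mimic a simple one-dimensional distribution so the whole thing stays small:

```python
# A toy version of the GAN "game": a generator makes fake samples, a
# discriminator tries to tell real from fake, and each network improves
# by trying to beat the other. The "real" data here is just numbers drawn
# from a Gaussian centered at 3.0, so the example stays tiny.
import torch
import torch.nn as nn

G = nn.Sequential(nn.Linear(8, 16), nn.ReLU(), nn.Linear(16, 1))                 # generator
D = nn.Sequential(nn.Linear(1, 16), nn.ReLU(), nn.Linear(16, 1), nn.Sigmoid())   # discriminator
loss_fn = nn.BCELoss()
opt_G = torch.optim.Adam(G.parameters(), lr=1e-3)
opt_D = torch.optim.Adam(D.parameters(), lr=1e-3)

for step in range(2000):
    real = torch.randn(64, 1) * 0.5 + 3.0        # samples from the "real" distribution
    fake = G(torch.randn(64, 8))                 # generator turns noise into fake samples

    # Discriminator step: label real samples 1 and fake samples 0.
    opt_D.zero_grad()
    d_loss = loss_fn(D(real), torch.ones(64, 1)) + loss_fn(D(fake.detach()), torch.zeros(64, 1))
    d_loss.backward()
    opt_D.step()

    # Generator step: try to make the discriminator call the fakes real (1).
    opt_G.zero_grad()
    g_loss = loss_fn(D(fake), torch.ones(64, 1))
    g_loss.backward()
    opt_G.step()

# The generated samples' mean drifts toward 3.0 as the generator learns.
print("mean of generated samples:", G(torch.randn(1000, 8)).mean().item())
```

The exact architecture and data don’t matter much here; the point is the alternating loop in which the discriminator learns to catch fakes and the generator learns to get past it.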

2019

So what’s happening now? We are seeing a rise in the use of machine learning that builds on findings from 2018, letting machines learn largely on their own and saving scientists much of the time that used to be spent inputting and labeling massive amounts of data.

We are also seeing facial recognition technology being used on a wider scale. It is being used in our phones instead of passcodes, on Facebook to automatically tag people in pictures, and it is even being adapted for use in the medical field to more easily diagnose patients.

In the computer world, AI chips are allowing regular computers to have some AI capabilities, like facial recognition and machine learning. This most recent way to use facial recognition could soon become an issue for us in the U.S.

Cloud computing is a rising topic within AI. It delivers resources like servers, storage, databases, networking, analytics, software, and intelligence over the internet.

Other applications for AI currently include using it for transportation services like Lyft to minimize wait times; fraud detection, check depositing, and business and personal credit card screenings in banks; personalizing suggestions on streaming services like Netflix; and automating business processes that are otherwise time consuming for employees.

Future

About 15% of businesses use AI today, but an Adobe survey indicates that upwards of 30% of businesses surveyed plan to add AI technologies this year. It seems this complex, somewhat scary, yet very useful tool won’t be going anywhere anytime soon.

The world is also working toward multimodal learning with AI, in which systems combine different kinds of data, such as images, text, and speech. Currently, most systems work with a single kind of data independently, but this approach would let them work together, increasing the amount of information they have access to and how much they can process at a time (a rough sketch of the idea is below).
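
As a purely illustrative sketch (my own toy example in Python, with random numbers standing in for real image and text features), the core “fusion” step can be as simple as joining two feature vectors and training one model on the result:

```python
# A toy "fusion" step for multimodal learning: feature vectors from two
# different kinds of data are concatenated so one model can use both.
# The features here are random stand-ins, purely for illustration.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 200
image_features = rng.normal(size=(n, 16))   # pretend output of an image model
text_features = rng.normal(size=(n, 8))     # pretend output of a text model

combined = np.concatenate([image_features, text_features], axis=1)
labels = (combined[:, 0] + combined[:, 16] > 0).astype(int)   # toy target that depends on both

clf = LogisticRegression().fit(combined, labels)
print("training accuracy using both modalities:", clf.score(combined, labels))
```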

This technology could be introduced into the auto industry for car computers, for security and payment authentication in businesses, for personalization of many things, as well as into the medical field to help improve imaging technologies.

The race to these breakthroughs is an intense one, with the top contributors being China, the U.S., the UK, Canada, and Russia. The competition is ruthless, as the U.S. just prevented Chinese companies from using our technology to advance their own initiatives.

As AI grows, the need for more emphasis on privacy and security, ethical guidelines, and ways to reduce bias remains ever apparent. So, while we have come a long way, we still have a long way to go. Here’s to hoping it is onward and upward.

Artificial Intelligence: A Job Killer or Job Creator?

Last Monday, the Trump administration hindered China’s AI research by banning six Chinese AI companies from doing any business with U.S. AI firms.

Despite this, a large Chinese telecom company called Huawei announced its prediction that soon 81% of business processes will be automated by AI. If this is true, many jobs could be lost.

A Wells Fargo report predicts that the banking industry will cut over 200,000 jobs in the next decade as it switches over to AI technologies. Parts of the banking industry have already started adding AI into their processes, like chatbots that aim to help customers with the issues they are having. These bots seem so human that many people don’t realize they aren’t in fact messaging another person.

On the flip side, some think that AI will lead to job growth. With this rise in tech, someone has to implement and program these new technologies. This could lead to a rise in jobs in programming, high-performance computing (HPC) administration, and data science.

AI is still imperfect and in desperate need of ethical guidelines so that we get as much benefit out of it as possible. Will the future of business in America be run by robots, or will we see major economic growth and job creation we can all benefit from? Only time will tell.

Is AI the Breakthrough we Need in the Medical Field?

Last Tuesday, researchers affiliated with ClinSpec Diagnostics Limited published a study finding that, by using AI, they were able to detect the presence of brain cancer using just a blood test.

What’s more, the AI-assisted test can tell the difference between control subjects and cancer patients with 92.8%-93.2% accuracy.

AI has many applications in the medical field, but its biggest potential lies in helping prevent disease, and doing so on a personal level, because as time goes on we are seeing just how much these diseases can vary from person to person.

But this technology won’t only be usable by cancer patients and those at risk. Wearable AI will allow us to compile health data about ourselves, and algorithms will then help us make sense of it so that our health is more in our control… or is it?

When we give up information about ourselves to something that will store it, especially when the internet is involved, we always take the risk that this information will be used against us. As with many applications of AI, we have to relinquish some control and privacy for it to help us, but can those creating it be trusted with such sensitive data?

Without these data inputs, AI cannot function. That creates many issues, a big one being that AI is only as good as the data it gets. It may be accurate in predicting things related to my health, as a 5’3″, 21-year-old white female, but if my data were used to make assumptions about the health of someone of another race or sex, those assumptions would be biased.
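
To illustrate that mechanism with a toy example in Python (every number below is synthetic and invented purely for illustration), here is a model fit to data from one group and then asked about a group whose underlying relationship is different:

```python
# A toy illustration of "only as good as its data": a model fit to one
# group is asked about a group with a different underlying relationship.
# All numbers are synthetic and invented purely to show the mechanism.
import numpy as np
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(0)

# Group A: the only relationship the model ever sees during training.
x_a = rng.uniform(150, 180, 500).reshape(-1, 1)            # e.g. height in cm
y_a = 0.9 * x_a.ravel() - 90 + rng.normal(0, 3, 500)       # some health metric

# Group B: a different underlying relationship the model never sees.
x_b = rng.uniform(150, 180, 500).reshape(-1, 1)
y_b = 0.7 * x_b.ravel() - 50 + rng.normal(0, 3, 500)

model = LinearRegression().fit(x_a, y_a)
print("mean error on the group it was trained on:", np.mean(np.abs(model.predict(x_a) - y_a)))
print("mean error on the group it never saw:     ", np.mean(np.abs(model.predict(x_b) - y_b)))
```

The model isn’t malicious; it simply never saw the second group, so its answers for that group are systematically off. That is what data bias looks like in practice.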

AI has many great applications in healthcare already, but there are still many gaps to fill for it to replace current options.

Artificial Intelligence is now a Part of Every Field

AI can be used for a variety of things, like making our businesses more efficient, helping us search the web with just our voice, guiding self-driving cars, suggesting new shows on Netflix, improving many aspects of our healthcare processes, and even writing stories for journalists.

With each of these applications, there are pros and cons. AI can make businesses more efficient, but it may take away jobs. And where journalism is concerned, will it squash or feed the fake news epidemic?

AI is such a rich and complex topic, and it now exists in every fiber of our world. So what’s the truth? Is this the rise or fall of mankind?

Some 2020 Campaign Websites Lack Appropriate Safety Measures Despite Online Privacy Concerns

The Internet Society’s Online Trust Alliance released findings this past Tuesday showing that 70% of the 23 presidential candidates failed its Presidential Campaign Audit. The audit set standards for consumer protection, data security, and responsible privacy practices.

Candidates like President Trump and Bernie Sanders passed, along with five others, but just barely. For example, some sites promise to share your information only with parties that also have high privacy standards, whereas Trump’s campaign website makes no such promise. This means that even if the campaign is using your data appropriately, those they share it with may not.

Another important protection some campaign websites don’t promise is that they won’t collect data from anyone under the age of 13. With privacy issues involving children being a hot topic right now, you’d think candidates, whom the public expects to protect them online, would be doing more to prove they are worthy of that responsibility.

To make matters worse, 74% of campaign websites failed the audit in 2016.

In our ever-changing, technology-focused world, our current and potential leaders should be doing more to step up their game and protect our online information. When considering your choice for our next president, remember to look into what they’re doing to protect you online, because these issues will only grow from here.

Concerns With Russian Interference in our Cyber World are Still Evident

Yesterday, new research into Russian malware gave us a better understanding of how Russia’s malware teams work. Much like the U.S., Russia has multiple teams that work independently and do not share their code, which prevents one operation from giving away another.

The teams are Potao Express, BlackEnergy, and Turla, which have infiltrated European government and military computers, as well as targeted groups across Ukraine, Russia, Georgia, and Belarus to harvest information. 

But in 2016, Russia appeared to have more malicious intent in Ukraine when, shortly before Christmas, the country’s national grid operator was infected with Russian malware. The attack caused a wide-scale blackout, but it lasted only about an hour, which raised the question of why Russia would plant this malware just to cause an hour-long outage.

Newer information shows that the goal was for the malware to destroy equipment, which would have resulted in a power outage lasting weeks or months. Furthermore, the attackers planned to damage important transmission systems when Ukraine tried to turn the power back on.

This type of cyber-warfare could set a dangerous precedent for other nations.

This may not seem like a big deal for us here in the U.S., but an interesting story I found might suggest otherwise. A new round of ransomware attacks recently hit 10 hospitals across the U.S. and Australia, paralyzing their systems.

Cities in Florida and Texas also recently experienced ransomware attacks, some of them giving in to ransom demands, and LabCorp and Hancock Health were hit as well.

With what Russia is capable of, and how often I feel like I see talk about them in the news, I was surprised that a Google Trends search showed that search volume for “Russian hacking” has been on a steady decline since the 2016 presidential election. Note that the spike in search volume on July 15, 2018 coincided with the World Cup final hosted in Russia; overall, search volume has gone down so much that I had to remove the time of the election from my date range to see a true trend, as it dwarfs the rest of the data.

There’s no way to confirm who is planting the malware across the U.S. right now, but with another huge election coming up, I think we’d be naïve to assume these attacks aren’t possibly of Russian origin, with attackers honing their skills for more widespread destruction this time around, especially given recent news stories of Putin jokingly saying that Russia will be trying to affect the 2020 presidential election. While this could just be a poorly timed remark, it surely doesn’t feel like it, and it seems some Twitter users agree.

Even though we can’t be sure right now whether these incidents are all connected, what we can be sure of is that Russia is a dangerous leader in the malware world, and that cyber-warfare is a very dangerous up-and-coming threat.

Are Quantum Computers the Up-and-Coming Threat to Our Cybersecurity?

On September 20, Google claimed to have achieved quantum supremacy, meaning a quantum computer performed a calculation that would be practically impossible for a traditional computer. The paper reporting these findings was uploaded to NASA.gov and then removed.

So how does this affect our cybersecurity? Traditional computers cannot, in any practical amount of time, solve the hard math problems that underlie the encryption, or cryptography, protecting our information. That makes brute-force attacks on it nearly impossible, but a quantum computer could solve some of these problems in a relatively short period of time.

Quantum computers differ from traditional computers, which run on a binary system that records data as a series of 1’s and 0’s. These machines work with qubits, which can represent a 1, a 0, or a quantum superposition of the two.

Left to itself, the system tends to settle out of superposition into a single definite state as it interacts with its surroundings. This loss of superposition is called quantum decoherence.

To keep the system in superposition long enough to work on multiple possibilities at once, you must draw on nanotechnology, quantum electronics, superconductors, and other extremely complex subject areas.
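
To make the superposition idea a bit more concrete, here is a minimal simulation sketch in Python with numpy (my own illustrative example; this is not how real quantum hardware is programmed). A qubit is represented as a two-component vector of amplitudes, a Hadamard gate puts it into an equal superposition, and “measuring” collapses it to a single classical bit:

```python
# A minimal simulation of superposition with plain numpy. A qubit's state
# is a two-component vector of amplitudes; the Hadamard gate puts |0> into
# an equal superposition of |0> and |1>, and "measuring" collapses it to a
# single classical bit. (Illustrative only; real hardware is nothing like
# a numpy array.)
import numpy as np

ket0 = np.array([1.0, 0.0])                  # definite state |0>
H = np.array([[1.0,  1.0],
              [1.0, -1.0]]) / np.sqrt(2)     # Hadamard gate

state = H @ ket0                             # superposition: amplitudes [0.707, 0.707]
probs = np.abs(state) ** 2                   # 50/50 chance of measuring 0 or 1
probs = probs / probs.sum()                  # guard against rounding
outcome = np.random.choice([0, 1], p=probs)  # measurement collapses the state
print("amplitudes:", state, "measured bit:", outcome)
```

The power of a real quantum computer comes from doing this with many entangled qubits at once, which is exactly the part that decoherence makes so hard to sustain.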

Quantum computing could be used to simulate complex chemical reactions and aid advances in chemistry, run high-level financial models, help us predict weather and climate changes more accurately, run AI programs with greater complexity, solve advanced problems in physics, break current cryptographic algorithms, and even enable new kinds of cryptosystems.

So does this mean that our information is at immediate risk? The answer seems to be no.

For starters, the U.S. National Institute of Standards and Technology has spent years working on security algorithms that are resistant to quantum computers, and so far it has five that may be feasible.

This finding has also sparked discussion on Twitter, and users are weighing in on this issue.

The general consensus from the cybersecurity community seems to be that although this is a major step in the right direction for this technology, it is still a very small one with a long way to go before we see any real potential for security-related problems or even any benefits from this technology overall.

What Happens When the Government and Tech Companies Collide

On September 13, the House panel orchestrating a broad antitrust investigation into technology businesses demanded that companies like Facebook, Apple, Alphabet, and Amazon hand over a long list of records documenting their business practices. The demand is part of a bipartisan investigation launched to look into whether these large companies are hindering competition in the technological landscape via their business practices.

This isn’t the first time that government and tech have collided. Where private information and business are concerned, we have seen many meetings of the minds over time, with plenty more to come, I’m sure.

On Tuesday, February 16, 2016, Apple received an order from a U.S. magistrate directing the company to create special software that would allow the FBI to unlock the iPhone of Syed Farook, a suspect in the San Bernardino shooting that killed 14 people. The software would have allowed an unlimited number of password attempts so that the FBI could gain access to the iPhone without damaging the data it contained. Apple’s chief executive officer, Tim Cook, refused, stating that once such software existed it could be hacked, leaked, or stolen, and that even creating it would put millions of Apple users’ data at risk. Apple fought the order, noting that since iOS 8, not even the company itself can bypass a user’s passcode to gain access to their information, and the government ultimately dropped the case.

On August 17, 2018, the U.S. government tried to make Facebook break the encryption on its Messenger app so that it could listen in on the voice calls of a suspect in a gang investigation in Fresno, California. In 2006, an appeals court had ruled that phone companies were required to allow police eavesdropping, and it extended the requirement to some large providers of Voice over Internet Protocol, like cable and other broadband carriers. Facebook refused the government’s request, and a judge ruled in its favor, though the records explaining the reasoning remain under seal.

On December 11, 2018, Google’s CEO, Sundar Pichai, testified before Congress after the company had come under fire over its interest in building a censored search engine for China, recent security breaches, its bulk data collection practices, and other issues including antitrust and competition. Instead of addressing many of these hard-hitting topics, Congress grilled Pichai about claims that Google filters out conservative viewpoints and showcases criticism of conservative policies, which the company has continuously denied. Twitter users offered other ideas for concerns that Congress should have addressed as well.

On Wednesday, September 4, Facebook, Google, Microsoft, and Twitter met with U.S. officials to discuss steps that each is taking, independently and collectively, to ensure that the 2020 presidential election is not tampered with after Russia interfered with the 2016 election by spreading disinformation on social media sites, including Facebook and Twitter. Google specifically is putting money into systems that will detect phishing and hacking attempts, identify foreign interference on Google platforms, and prevent digital attacks on candidates’ campaigns, but some believe that with these same companies at the table, we will end up with another botched election.

While this post could likely go on for hundreds of paragraphs digging into times when the government and tech leaders have worked with or against each other, these are some of the more recent and important instances. With the privacy of users and citizens constantly under scrutiny by both sides, I expect to see many more collisions of law and tech in our future.