
The importance of preparation

Given recent events, I started wondering how people can prepare themselves for such times. These days, military clashes happen both in the physical world and in cyberspace, and there are many parallels between defending assets in the two. In this article, I shall try to list the different approaches one could use to harden their defenses. At the same time, I shall try to give a clear picture of the goals defenders should aim for.

So what is the ultimate goal of every defender? By default, it is to make the cost of the attack too high and in this way diminish the gains from that attack. This kind of narrative appears in many books focused on the defensive side of cybersecurity. It is important to note that sometimes people attack other people for personal reasons or even out of emotion. In these cases, attackers usually do not care how much the attack will cost them. As defenders, we should consider these motives during the design phase of our defense.

You can see a sample architecture of an off-grid data center on the diagram. Such data centers have much better resilience during adverse events.

There is an apt proverb regarding the importance of preparation – more sweat in training, less blood in the fight. If we transfer this to the realm of cybersecurity – the more effort we put into preparing the infrastructure, the less likely it is to be penetrated. So how can we prepare ourselves for an attack?

  • Buy quality equipment: Your equipment should be neither the most expensive nor the cheapest. You need gear that can do the job and has a lifespan of at least five years. It is a good idea to buy multiple pieces, so you have hot spares in case of failure. Items in the middle price range are usually good candidates.
  • Plan and train: There is little sense in having great gear without using it. Regular training sharpens skills and decreases reaction time when the equipment is actually needed. At the same time, testing the items helps check their limits and allows the designer to prepare a better defense. In the realm of cybersecurity, we could run regular red/blue team games, where the red team tries to penetrate the infrastructure and the blue team defends it.
  • Be realistic: If your attacker has far more resources (money and time) than you, they will penetrate you. There is little sense in making sure your electronic infrastructure survives the EMP wave following the detonation of a nuclear warhead. At the same time, it makes excellent sense to back your data up into a protected vault and to have replacement units if such an event happens.
  • Hack and Slash: Don't be afraid to modify your equipment if it does not suit your needs. Many security units prefer buying cheaper equipment and rigging it to serve two or three purposes. Play around with your gear, and don't be afraid of breaking it. Sometimes you can find real gems by doing that.

In conclusion, preparation for any defense activity comes with a lot of research. The primary goal of every defender is to increase the cost of an attack; the higher the price, the less motivated the attacker will be. Often the resources of the two sides are asymmetric, and thus some defenders must think like guerrilla fighters or even like start-up owners: they have to squeeze every last bit of efficiency out of their infrastructure.

Should countries hire hacker-privateers to engage other countries in cyber wars?

Unfortunately, during the last two years we have seen quite a rise in the number of cybercrimes worldwide. Many attacks allegedly came from nation-state actors, and plenty of finger-pointing in the public media space supported this claim. Life is indeed a struggle, and the strongest almost always win. Still, there is a subtle difference between aggressively attacking foreign countries and defending your own interests and infrastructure.

As a matter of fact, we could describe the last couple of years as a series of standalone cyber battles, which could eventually escalate into a fully-fledged cyberwar. In such situations, some people start fantasizing about hiring hacker-privateers and starting a Cyber World War in which teams of the best hackers fight each other. It sounds like an incredible plot for a sci-fi novel, but there are reasons why such actions could lead to disaster in reality:

On the diagram, you can see the standard military uses of electrical and communication equipment. Cyberwarfare privateers can use their skills to attack many targets without ever going near the real battlefield.
  • Global World: We live in a global village. The world is no longer disconnected, and a single crisis can quickly affect all of it. Take the COVID-19 situation, for instance: despite its allegedly natural origin, it blocked the global economy and opened many old wounds. Believe me, if a worldwide cyberwar happens, we shall have much more complex problems, which could easily lead to a conventional or even a nuclear large-scale war.
  • Ethical Reasons: An old proverb states that being able to do something, being willing to do it, and actually doing it are three entirely different things. Ethical hackers could start a fully-fledged cyberwar if it suited their business. However, I believe that cybersecurity must be oriented toward stopping criminals rather than advancing political agendas or starting conventional or nuclear wars.
  • Willingness: Most white hat cybersecurity specialists will not commit an act of aggression for any sum of money. As patriots, they care for the well-being of their country; however, being a patriot is one thing, and carrying out destructive actions against another country or organization is quite another. At the same time, most hackers on the offensive side are criminals, and working for state actors would expose their identities and land them in jail. These factors reduce the number of individuals willing to work as hacker-privateers to a tiny one.

In conclusion, cybersecurity and hacking are not equivalent to conventional armies. Sure, we can use the same terminology and even run "war" games. But essentially, the whole sector is much closer to standard private security companies, which defend infrastructure perimeters and fight crime. The role of pentesting companies is to test these defenses by acting like criminals. Everything beyond that should be categorized as cyber warfare and be forbidden.

The good, the bad, and the ugly of the Open Source software model

I want to start this post by stating that I am a fierce supporter of Open Source, and all of my computers, servers, and smartphones run different flavors of Linux. Over the last ten years, I have used Windows ten times at most, and only because some software vendors have been neglecting the Linux ecosystem for years. Other than that, I have no wish or need to touch Mac or Windows for anything other than testing web or mobile apps.

At the same time, I want to strongly emphasize that Open Source as a model has its problems and that, in my view, no software development practice, Open Source or proprietary, is ideal. This post aims to list some of the advantages and disadvantages of the Open Source model. Despite its widely successful run over the last 30-odd years, the model is somewhat economically broken. But let's start with the lists:

The good

  • Open Source is almost free: Most Open Source projects offer free plans for casual users or tech-savvy customers, and an ecosystem grows around them. This way, a whole set of companies can build their business models on top of these freemium plans and add value.
  • More openness: People working on Open Source projects must build an ecosystem, and people stay in an ecosystem only if it is open to proposals and changes according to members' needs. Otherwise, the ecosystem usually does not survive for long. Additionally, everyone can review the code and search for security holes.
  • Better collaboration: Legally speaking, if two organizations want to work together, they should sign a contract covering every point of the collaboration. With Open Source, organizations already know how to work with the various licenses and do not need to reinvent the wheel for their specific case.

The bad

  • Lack of responsibility: Most Open Source software comes without any obligations for the authors. Whether there are security holes, bugs, or losses caused by using the software, the authors are not responsible.
  • Too much decentralization: When a project becomes very popular, the lack of centralization increases politics and power struggles. With multiple controlling bodies or boards governing the project, the number of interested parties grows, which can sometimes turn decision-making into a nightmare.
  • Lack of support: Some Open Source projects entirely lack technical or user support. Even when they offer support, the customer must pay a lot of money to get any meaningful help, and the lower-cost plans are usually not helpful enough.
On the diagram, you can see a standard crawler architecture. Most products implementing this architecture use Open Source components to speed up development and lower costs, and they must live with the problems that come from using those components.

The ugly

  • Sometimes less secure: Many projects do not have the resources to ensure a proper level of cybersecurity, despite being used by many people. A recent example is log4j – all major Java products use it, and a big security hole in it was discovered just a couple of weeks ago.
  • Complicated business model: Open Source is hard to monetize. Many products try to survive on donations or paid support. However, this monetization model does not scale as well as the proprietary one.
  • Legal mess: Proprietary products usually build on top of Open Source ones to speed up development time, a technique used primarily in start-ups and consulting companies. However, this approach has its problems. What happens in a case like log4j, where a security hole or a bug in one of your Open Source components leads to data leaks or financial losses? Who is responsible? By default, it is the user of the component, i.e., you.

In conclusion, Open Source is not for everyone. It can be more secure and better supported, but usually only when the code comes from a reputable software vendor. In all other cases, users are left on their own to handle security and support. Another question is whether the alternative (using only proprietary software) is better, but I will analyze that in another article.

Why so much data?

The New Year is coming, and during this period people usually assess what they did during the previous year. As a person with skills and experience in the defensive part of cybersecurity, I am always quite sensitive about sharing information, contracts, and legal documents with anyone, including institutions. Multiple times during the last year, I had to present official documents and explanations of why and how I did something. On one occasion, I had to deliver around 20 (yes, twenty) documents to prove a right of mine. Some of the documents did not even relate to the right I wanted to exercise, but the institution tried to enforce its policy on me. The representatives in the office even told me that I should trust the institution and that this was the first time someone had asked about their data retention period, how they would make sure the documents are destroyed after that period, and why they needed the data at all.

During the last year, all of these experiences raised the following questions in my mind: Is my data safe in any institution? Would it be safer if I took care of my data myself rather than leaving it to an institution? Can an ordinary person achieve a better level of security than an institution?

The diagram shows a standard SSD storage system architecture used in almost all database systems. Because of the way SSDs store information, standard secure-delete procedures do not actually erase the data. Special tools are needed for that, and we can only hope that the institution's SysOps department is qualified enough to erase the information properly.
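As an illustration, here is a minimal sketch (my own, not from the article) of what such a "special tool" workflow can look like, assuming a Linux host with hdparm installed and a SATA SSD whose security feature set is not frozen. The device path and password below are placeholders, and the commands are destructive, so treat this purely as a sketch of the ATA Secure Erase approach rather than a ready-to-run script.

    import subprocess

    DEVICE = "/dev/sdX"      # hypothetical target drive, replace before any real use
    PASSWORD = "erase-pass"  # temporary password required by the ATA security feature set

    def ata_secure_erase(device: str, password: str) -> None:
        # Set a temporary user password, which enables the drive's security feature set.
        subprocess.run(["hdparm", "--user-master", "u",
                        "--security-set-pass", password, device], check=True)
        # Ask the drive's own controller to erase all cells, including remapped ones
        # that a normal overwrite would never touch.
        subprocess.run(["hdparm", "--user-master", "u",
                        "--security-erase", password, device], check=True)

    if __name__ == "__main__":
        ata_secure_erase(DEVICE, PASSWORD)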

To all of these questions, the answer is usually the same – it depends on the level of expertise of the defending side, i.e., on the professionals the institution has hired. To support my statement, I can list several case studies showing how attackers managed to penetrate even institutions and leak data:

  • Bank Hack: During a regular penetration testing exercise, a team of white hats managed to penetrate multiple branch offices of a large French bank. Only in one of the offices did the employees ask the penetration expert to identify himself and call headquarters to check whether anyone had been sent.
  • Government Tax Authority Hack: A couple of years ago, a hacker managed to leak multiple gigabytes of data from the Bulgarian tax agency. The security hole had been open for an extended period and reported numerous times, yet no one took action to close it.
  • Universities Hack: At the beginning of 2021, multiple US universities, including members of the Ivy League, were hacked, and the personal information and documents of their students, lecturers, and professors were leaked to the public.

In conclusion, I think we can safely assume that taking care of our data is both our right and our responsibility. I am happy to delegate this responsibility only to legal professionals (lawyers, notaries, and judges); they work with confidential documents every day and know how a data leak can affect people. In any other case, sharing data with third parties must come with at least a declaration of their data retention practices and of how they destroy the data (there are security practices for doing that correctly).

Attack of the cables

In last week's article, I spent some time discussing the disadvantages of penetration testing. The main limiting factor for every red team is the client's rules of engagement, which make the exercise hardly comparable to a real-life attack. At the same time, some of the latest developments in the field are pretty disturbing and could be used by hackers for malicious activities.

One such gadget, manufactured by Hak5, looks like an ordinary USB charging/data cable, but it comes equipped with the latest keylogging capabilities. Additionally, the cable supports the following features – Keystroke Injection with DuckyScript™, Keylogging (650,000 key storage), USB-C Smartphone & Tablet Keystroke Injection, Remote Access by WiFi, Customizable Self-Destruct, Multiple storage slots for large payloads, On-Boot payloads, Remote Trigger by WiFi (Geofencing), Long Range WiFi Trigger (2 KM+), Control from any Web Browser and Scriptable WebSocket. In short, that cable is a fully working micro-computer with remote access capabilities for loading payloads and executing them without the victim's knowledge. As a bonus, it looks exactly like a standard USB to USB-C cable, and they even offer versions for Macs.

A creative attacker can think of many uses for these cables. For example, they could ask to borrow your cable and return a malicious one instead. They could break into your home or office and swap the cables. They could seed a computer shop's entire stock with these cables and sell you one. The options are almost limitless. With such a gadget around, you virtually cannot trust any cable or flash drive you buy from your hardware supplier, nor your friends' or family's equipment.

On the diagram, you can see a sample of how the cable works. It simply tricks the computer into treating it as an ordinary cable, while the hacker delivers the payload over Wi-Fi and activates it.
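On the defensive side, one practical countermeasure is to watch for unexpected input devices, since keystroke-injection hardware typically announces itself to the operating system as an extra keyboard. The following is a minimal sketch (my illustration, not a Hak5 tool), assuming a Linux workstation with the pyudev package installed and assuming such cables enumerate as a keyboard-class device when their payload fires.

    import pyudev

    def watch_for_new_keyboards() -> None:
        context = pyudev.Context()
        monitor = pyudev.Monitor.from_netlink(context)
        monitor.filter_by(subsystem="input")  # only input devices (keyboards, mice, etc.)

        # Block and react each time the kernel reports an input-device event.
        for device in iter(monitor.poll, None):
            if device.action == "add" and device.get("ID_INPUT_KEYBOARD") == "1":
                # A brand-new "keyboard" appeared. If nobody plugged one in,
                # this is exactly what a keystroke-injection cable looks like.
                print("New keyboard-class device detected:",
                      device.get("ID_MODEL", "unknown model"))

    if __name__ == "__main__":
        watch_for_new_keyboards()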

We could imagine that the next step for companies such as Hak5 is to embed a full-blown ADB build into the cable and enable remote penetration attacks against smartphones. Such a cable would be quite the gadget and could enable even more attack scenarios.

For a long time, I have wondered why such equipment is not treated the same way as weapons. The relative ease of manufacturing and using such gadgets makes them more and more dangerous. Without regulations or even government-issued permits, more and more people will have access to them. What is the guarantee that they will not end up in the hands of black hat hackers or criminals? Not to mention that every white or gray hat hacker could potentially go rogue and become a black one. What is the guarantee that such gadgets will not be used for malicious purposes even by licensed professionals?

In conclusion, the penetration testing landscape has become more and more concerning. Without a good set of regulations, we could soon see many people using military-grade hacking gadgets, turning the defensive part of cybersecurity into a terrible nightmare. In any case, many defenders will not be thrilled by the idea of wrapping their USB cables and flash drives with aluminum tape[1] every time they buy new hardware. Sure, it is a cheap way of blocking radio waves, but the aesthetics leave something to be desired.

[1] – https://emfacademy.com/aluminum-foil-emf-radiation/

Red team that

Red team exercises and penetration testing have become our new reality. Especially with COVID-19 and the additional boost it gave to digital transformation, we are more and more digitally dependent. Many countries have started legally requiring businesses and governmental organizations to better ensure their cybersecurity defenses. And the best way to do that is to run regular penetration testing drills once or twice per year.

However, we should ask ourselves how effective this practice is and whether it provides a good level of security for our data and assets. To do that, let's analyze the usual workflow of a penetration test. Every engagement in cybersecurity starts with a legal contract, which defines the rules of engagement. In this contract, the defending side (the client) negotiates the rules with the attacking side, aka the penetration testing company. In the case of third-party requirements, such as governmental and corporate integrations, the rules are defined by the third party. They can even give you a list of "trusted" penetration testing companies from which you have to choose. For example, when we had to pass a security audit for Google and Microsoft to allow our integrations, they provided the list of auditing partners we had to use.

On the diagram, you can see a standard Red Team drill workflow. The rules are usually not set by the attacking team.

And here is one of the main problems with penetration testing – there are rules of engagement. In a real-world scenario, there is no such thing as a set of rules. A dedicated attacker will do whatever is necessary to penetrate your defenses and will not abide by the law. Comparing the exercise to conventional military drills is not only impractical but could even be harmful to your team's attitude. It is much better to compare the attacker's modus operandi to that of guerrilla fighters: asymmetric warfare, without rules, requirements, etc. Attackers will do what is necessary to penetrate you, no more and no less.

The second problem with penetration tests is that the attacking team usually has limited time to penetrate your organization. The whole economy around red teaming is based on engagements lasting between two weeks and two months; after that, the attacking team must move to the next assignment. In comparison, real-world hacker teams are usually part of criminal syndicates, which have other sources of income such as human trafficking, prostitution, drugs, and weapons. They can happily keep trying to hack one organization for a year or more, especially if it is a big enough target.

And last but not least – different motivation. In the red team case, the motivation is to satisfy a requirement or a law. This means the penetration team will make sure the system passes the requirements and will rarely do more. When we speak about real hacker groups, we usually see a pretty limited set of motivations – money and personal vendetta. Both of these are much higher on the motivation scale than simply fulfilling a requirement.

In conclusion, penetration testing is a helpful activity; however, it is not a panacea. Its results give a snapshot of your organization's current cybersecurity state. Unfortunately, the limited scope of every penetration test means there is no 100% guarantee that attackers cannot break through your defenses. It is much better to treat red teaming as one part of your cybersecurity strategy and one of many valuable security tools.

Is Artificial Intelligence the solution for cybersecurity – Part 2?

So, let me go back to cybersecurity. Sorry for the long, math-heavy explanation in the last part, but if one wants to use a given tool, one must understand how it works and, more importantly, its limits. Here is my list of reasons why believing the current AI-in-cybersecurity hype can be dangerous for you and your organization:

  • We must not compare a Machine Learning model to the human brain: We have no idea how the human brain works, and especially how idea creation and generalization work. Additionally, the power consumption of a machine learning model is many times higher than that of a human brain. Sure, it is faster, but it is much more expensive. The average power consumption of a typical adult is 100 Watts, and the brain consumes about 20% of that, making the brain's power consumption around 20 W. For comparison, Google's DeepMind project uses a whole data center to achieve results that a two-year-old kid achieves with 20 W.
On the diagram, you can see what kind of problems Machine Learning algorithms can solve in the cybersecurity field. All of the activities listed in the last row are some form of categorization used for detection. No prevention is mentioned.
  • Machine learning is weak in generalization: The primary purpose of the generated polynomial is to solve the so-called classification (categorization) problem: we have a set of objects with characteristics, and we want to put them into different categories. Machine learning is good at that. However, if we add a new category or dramatically change the set of objects, it fails miserably (see the sketch after this list). In comparison, the human brain is excellent at generalization or, in everyday words, improvisation. If we transfer this to cybersecurity – ML is good at detection but weak at prevention.
  • Machine Learning offers nothing new in Cybersecurity: For a long time, antivirus and anti-spam software have used rule engines to decide whether an incoming file or email is malicious. Essentially, this is just simple categorization, where we mark the incoming data as harmful or not. All of the currently advertised AI-based cybersecurity platforms do the same – instead of building the rule engine manually, they use Machine Learning to train their detection abilities.
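To make the generalization point concrete, here is a minimal sketch (my illustration, assuming scikit-learn is installed and using made-up toy data). A classifier trained on two categories of events can only ever answer with those two categories, so a genuinely new kind of attack gets forced into one of the known boxes.

    from sklearn.linear_model import LogisticRegression

    # Toy feature vectors: [requests_per_minute, failed_logins]
    X_train = [[5, 0], [7, 1], [300, 40], [250, 35]]
    y_train = ["benign", "benign", "brute-force", "brute-force"]

    clf = LogisticRegression().fit(X_train, y_train)

    # A slow data-exfiltration pattern the model has never seen as a category.
    novel_event = [[20, 0]]
    # The prediction is always "benign" or "brute-force" - the model cannot
    # invent an "exfiltration" category it was never trained on.
    print(clf.predict(novel_event))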

In conclusion, cybersecurity Machine Learning models are good at detection but not at prevention. Marketing them as a panacea for all your cybersecurity problems can be harmful to organizations. A much better way to present these methods is as another tool in the cybersecurity suite, to be used appropriately. A good cybersecurity awareness course will do more for prevention than the current generation of Artificial Intelligence systems.

Is Artificial Intelligence the solution for cybersecurity – Part 1?

Lately, we have seen a trend of cybersecurity solutions advertising themselves as Artificial Intelligence systems that claim to detect cyber threats and protect your organization from them. Many people do not understand what stands behind modern Machine Learning methods and which problems they can solve. More importantly, these methods do not provide a full range of tools to achieve the dream of every Machine Learning specialist – general Artificial Intelligence or, in other words, a perfect copy of the human brain.

But how does Machine Learning work? Essentially, almost every Machine Learning algorithm follows the same paradigm. First, we have a set of data, which we call training data. We divide this data into input and output data. The input data is what our Machine Learning model uses to generate an output, and we compare this generated output with the output in the training data to decide whether the result is good. During my Machine Learning studies, I was amazed at how many training materials could not explain how we create these models. Many authors just start throwing mathematical formulas at the reader or give highly complex explanations comparing the models to the human brain. In this article, I am trying to provide a simple, high-level description of how they work.

On the diagram, you can see a standard Deep Learning model. The Conv/ReLU and SoftMax parts are actually polynomials sending data from one algebraic space to another.

So how do these Machine Learning algorithms do their job? A powerful branch of mathematics studies the properties of algebraic structures (at my university, it was called Abstract Algebra). The whole idea behind this branch is to define algebraic vector spaces, study their properties, and define operations over them. Going back to Machine Learning: essentially, our training data represents an algebraic space, and our input and output data are sets of vectors in that space. Luckily, some algebraic spaces support the standard polynomial operations, and we can even generate a polynomial that maps the input data to the output data. And voilà, a Machine Learning model is, in many cases, a generated polynomial which, given input data, produces output similar to what we expect.
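Here is a minimal sketch of that idea (my illustration, assuming NumPy and made-up toy numbers): we "train" by fitting polynomial coefficients to the training inputs and outputs, then use the resulting polynomial as the model and compare its outputs with what we expect.

    import numpy as np

    # Toy training data: inputs x and the outputs y we expect for them.
    x_train = np.array([0.0, 1.0, 2.0, 3.0, 4.0])
    y_train = np.array([1.0, 2.1, 4.9, 9.8, 17.2])  # roughly y = x^2 + 1

    # "Training" here is just generating the coefficients of a degree-2 polynomial.
    coefficients = np.polyfit(x_train, y_train, deg=2)
    model = np.poly1d(coefficients)

    # The "model" now maps unseen inputs to outputs similar to what we expect.
    print(model(5.0))                                # close to 26
    print(np.abs(model(x_train) - y_train).mean())   # the training error stays small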

The modern Deep Learning approach uses a heavily modified version of this idea. In addition, it trains its models using mathematical analysis over the generated polynomial and, more importantly, its first derivative. This method is not new. The only reason its usage has risen lately is that NVidia managed to expose its GPU API to the host system via CUDA, making matrix calculations far faster than on standard CPUs. And yes, a matrix is by definition a set of vectors. Surprisingly enough, the list of operations supported by a modern GPU is the same set used in Abstract Algebra.
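To illustrate the role of the first derivative, here is a minimal gradient-descent sketch (my illustration, again assuming NumPy and toy data): the loss is a polynomial in the weight w, and each training step moves w against the loss's derivative.

    import numpy as np

    x = np.array([1.0, 2.0, 3.0, 4.0])
    y = np.array([2.0, 4.0, 6.0, 8.0])  # the "right" weight is w = 2

    w = 0.0              # initial guess for the model y = w * x
    learning_rate = 0.01

    for step in range(500):
        predictions = w * x
        # Mean squared error loss: L(w) = mean((w*x - y)^2)
        # Its first derivative with respect to w: dL/dw = mean(2 * x * (w*x - y))
        gradient = np.mean(2.0 * x * (predictions - y))
        w -= learning_rate * gradient  # step against the derivative

    print(w)  # converges toward 2.0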

In the next part, we shall discuss how these methods are used in Cybersecurity.

Is vaccination certification the way to go?

We are almost two years into the COVID-19 world, and we have seen a good number of ways to control the pandemic. We now have vaccines, which will hopefully become better and better with time, and eventually the pandemic will be over. Along with the light at the end of the tunnel, however, come some costs to our privacy. Many governments have decided to issue digital vaccination certificates and use them to grant access to some of the locked-down social services such as cinemas, bars, hotels, concerts, etc. However, we need to understand that such a solution comes with its own burden, especially if it is not appropriately designed.

But what are the different methods of actually issuing a digital certificate for any data? We need a CA (certification authority) to somehow sign our data. In the paper world, this happens via the signature and stamp of a notary. In the digital world, the certificate is signed by a machine using modern cryptographic methods. There are different media for carrying this digitally signed certificate, and I shall cover them in a short list (a small signing sketch follows the list):

On the diagram, you can see the technical layout of a standard NFC solution. The reader sends energy and data using electromagnetic fields; the NFC data storage is passive and usually does not have a battery.
  • A printed certificate with a QR code: For many years, the aviation industry has used QR codes for authentication and a faster boarding experience. The QR code contains signed data that the boarding gate reads; if it verifies correctly, the gate lets the passenger through. This method is good from a privacy point of view, but you will need to keep the paper with you constantly, which is especially true in the case of a vaccination certificate. Additionally, anyone who scans the QR code can read its contents.
  • A digital record based on your data: Almost every person on Earth has a personal identification number issued by their country of origin. The government could base the vaccination certificate on this number and record your shots on an online server. However, this is the worst method in terms of privacy, because your vaccination record is personal data and must be protected by a proper authentication mechanism.
  • NFC-based certificate: Modern digital ID cards use this technology to keep a signed copy of your data. This way, anyone with an NFC reader can read the data from your card and verify it using the stored X.509 certificate. As opposed to the paper solution, the NFC one is reprogrammable, which means the same card/chip can be updated with more medical information, and everything stays locally on the card. This option is the best in terms of privacy; however, you will need an NFC-shielded purse or backpack to keep the data safe.
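All three media rely on the same underlying signing idea, which the following minimal sketch illustrates (my illustration, assuming the Python "cryptography" package and a hypothetical payload; real deployments use full X.509 PKI machinery rather than this bare-bones example): the authority signs the certificate payload, and any verifier holding the public key can check that it has not been tampered with.

    from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey
    from cryptography.exceptions import InvalidSignature

    # The health authority's key pair (illustrative only).
    authority_key = Ed25519PrivateKey.generate()
    public_key = authority_key.public_key()

    # Hypothetical certificate payload; this is what ends up in the QR code or NFC chip.
    payload = b"name=J. Doe;vaccine=XYZ;doses=2;date=2021-11-01"
    signature = authority_key.sign(payload)

    # A verifier (boarding gate, venue scanner) checks the signature offline.
    try:
        public_key.verify(signature, payload)
        print("Certificate is authentic")
    except InvalidSignature:
        print("Certificate has been tampered with")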

In conclusion, digital vaccination certificates can help governments control the pandemic. However, in the long term there are many privacy issues which could affect the general population. For example, what happens if hackers manage to collect data on everyone, vaccinated or not, and create illegal lists that employers could later use to decide whether or not to hire a given candidate? There are already cases of illegal lists based on chronic diseases being distributed on the black market, and we could easily see a similar future for our vaccination passport data.