Sorry for the long delay between posts, but the last couple of months were quite hectic.
Thank you and stay tuned for more!
Given the recent events, I started wondering how people can prepare themselves for such times. In the modern era, military clashes happen both in the real world and in the cyber one, and there are many parallels between defending assets in these two worlds. In this article, I shall try to list the different approaches one could use to harden their defenses. At the same time, I shall try to give a clear picture of the goals defenders should aim for.
So what is the ultimate goal of every defender? By default, it is to make the cost of the attack so high that it diminishes the gains from that attack. This narrative is common in books focused on the defensive side of cybersecurity. It is important to note that sometimes people attack others for personal reasons or out of emotion, and in those cases attackers usually do not care how much the attack will cost them. As defenders, we should consider these motives during the design phase of our defense.
There is an apt proverb regarding the importance of preparation – more sweat in training, less blood in the fight. Transferred to the realm of cybersecurity, the more effort we put into preparing the infrastructure, the less likely it is to be penetrated. So how can we prepare ourselves for an attack:
In conclusion, preparation for any defensive activity comes with a lot of research. The primary goal of every defender is to increase the cost of the attack: the higher the price, the less motivated the attacker will be. Often the resources of the two sides are asymmetric, and thus some defenders must think like guerrilla fighters or even like start-up owners. They have to squeeze the last drop of efficiency out of their infrastructure.
These days, many articles explain how people must have a side hustle and create multiple income sources. Unfortunately, one significant shortcoming of this content is that it does not actually show how to build such a side hustle. It does not list the disadvantages and sacrifices that come with having one. And finally, it does not explain how to create a team to work on that side hustle. This part will address these challenges and describe how our hypothetical team can resolve them.
But let’s start by listing the main disadvantages of having a side hustle:
After listing the most critical disadvantages, let’s see how our hypothetical team can overcome them. The mandatory requirement for a moderately successful side hustle is at least ten hours per team member per week. In real life, if one of the team members cannot dedicate that amount of time per week, the setup becomes fragile, and there is a high chance of disaster. With this prerequisite fulfilled, our fictional group could use the following mechanisms to improve their efficiency:
Our hypothetical team will have three back-end devs, one front-end dev/designer, and one DevOps member. Each will dedicate ten hours per week to the project, and one of the back-end devs will be the squad leader. Every two weeks, they will have a two-hour call to discuss the current status and decide what to do next. Additionally, they will use Slack, GitHub, and GSuite for synchronization.
In conclusion, I would advise taking the statements above with a grain of salt. They have helped me in my own experience. I have used side hustles from an early age to keep myself in shape and learn new skills. For example, I completed my bachelor’s and master’s degrees while working full time. However, such dynamics quickly take their toll on most people and can even lead to burnout or illness. Given that, I would advise you to choose your teammates carefully – not everyone is “crazy” enough to live such a lifestyle.
In the following weeks, I shall write a couple of blog articles focused on how you can build a fictional technical product on an extremely tight budget. The product will be framed as a side hustle whose idea will be to restructure, rewrite, and put into modern shape an old project. We shall try to minimize the amount of time and money spent on the product because, most of the time, side hustles do not pay the bills. At the same time, the approach will show how little a technical team needs in order to create and release a product.
But what will the idea be? A simple tool that improves the way users plan their work. There are tons of such solutions on the market, and big companies have been developing something similar since the 1990s. Keeping that in mind, we would like our fictional product team to use new ways of working, check whether they can form a highly effective team, have some fun, and focus their attention on something constructive. Of course, in reality, there would be no chance of scaling such a project. In addition, side hustle teams lose their energy and motivation to work long-term. In real life, people shift priorities – some start working at a big corporation, some have to focus on their income sources, and others have kids.
Despite these facts, such a thought experiment could be beneficial for every technical team. In our fictional situation, the team will manage to make an initial version of the tool, build a website, produce a video, write a couple of technical whitepapers, create “branding” elements, and improve their skills along the way.
In the following parts of this series, I shall explain how this team will manage to achieve all of this in their “free” time and how much it will cost them in terms of money. Every part will focus on one of the following items – branding elements, website, video, technical whitepapers, and finally, team structure and way of working. Hopefully, this will help you build your own product and structure your own team using the same tools and approaches.
In economics, assets are traditionally categorized as divisible and non-divisible. We could put all fiat currencies, gold, land, etc., into the divisible category – everything we can divide into smaller chunks. On the other hand, a non-divisible asset is one that we cannot legally divide – for example, we cannot cut a piece from a painting and sell only that piece. The same is true for apartments, buildings, collectibles, etc. Both types of assets are usually identified by a unique number or id, but non-divisible assets sometimes exist in only one copy.
We can see many parallels if we return to the crypto world and map the previous paragraph onto the different tokens offered by the various crypto exchanges. In crypto, we call all divisible tokens “fungible”. Examples of such tokens are bitcoin, ether, and any other cryptocurrency. To verify transactions over these tokens, we rely on how blockchain networks work. Every transaction is cryptographically signed, and each transaction keeps the metadata for the token transfer between two or more wallets. Usually, this metadata stores the unique id of the divisible token (when we split a token, we typically generate a new id/number for every part of the split).
The programming logic that implements the described set of features is called a smart contract. It could be described as a daemon program (for people who are not familiar with the terminology, this is a service program running in the background) whose operations can be stored in the ledger storage and are cryptographically signed. So essentially, when we transfer tokens, we call this program and its API.
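To make the idea more tangible, here is a minimal, purely illustrative Python sketch of such a program – a hash-chained, append-only ledger of transfers. The class, method names, and hashing scheme are my own assumptions for demonstration, not how any real blockchain implements smart contracts.

```python
import hashlib
import json
import time


class TokenLedger:
    """Toy 'smart contract': every transfer becomes a hash-chained ledger record."""

    def __init__(self):
        self.balances = {}   # wallet id -> token amount
        self.ledger = []     # append-only list of transfer records

    def mint(self, wallet: str, amount: float) -> None:
        self.balances[wallet] = self.balances.get(wallet, 0) + amount

    def transfer(self, sender: str, receiver: str, amount: float) -> dict:
        # The 'API call' a wallet client would invoke when moving tokens.
        if self.balances.get(sender, 0) < amount:
            raise ValueError("insufficient balance")
        self.balances[sender] -= amount
        self.balances[receiver] = self.balances.get(receiver, 0) + amount

        record = {
            "from": sender,
            "to": receiver,
            "amount": amount,
            "timestamp": time.time(),
            "prev_hash": self.ledger[-1]["hash"] if self.ledger else None,
        }
        # Stand-in for the cryptographic signature over the transfer metadata.
        record["hash"] = hashlib.sha256(
            json.dumps(record, sort_keys=True).encode()
        ).hexdigest()
        self.ledger.append(record)
        return record


ledger = TokenLedger()
ledger.mint("alice", 10)
print(ledger.transfer("alice", "bob", 2.5))
```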
Let’s return to NFTs now. NFT means non-fungible token, and by its logic it is a non-divisible asset. Every NFT has a unique ID, similar to standard tokens, and can be transferred between owners. The slight difference is that we cannot divide them, and currently the protocol does not support multiple owners of the same NFT. Additionally, unlike standard tokens, NFTs can be emitted only by manual intervention and are not auto-generated by the protocol itself as rewards.
A deeper analysis of the described behavior suggests that NFTs were designed to replace the standard legal contract by enabling the parties to upload their deal’s metadata into the blockchain – and thus probably to avoid the use of a notary, or at least to digitalize their work.
But how does this transfer to digital art and collectibles? Usually, digital art is a digital file in some format (most of the time we are talking about images, but it could also be, for example, a whole game model in a video game). And copying a digital file is one of the first operations we are taught when using computers. Here comes the help from cryptography – we can easily calculate a hash of the file, generate a random id for it, and sign both in the issuer’s name. This way, an artist can upload their file multiple times and offer a unique NFT for every copy of the file.
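As a rough sketch of that minting step, using Python’s standard hashlib, hmac, and uuid modules, the process could look like the snippet below. The HMAC is only a stand-in for the issuer’s real public-key signature, and all names are hypothetical.

```python
import hashlib
import hmac
import json
import uuid


def mint_nft(file_path: str, issuer: str, issuer_secret: bytes) -> dict:
    """Build the metadata an NFT could carry for one copy of a digital file."""
    with open(file_path, "rb") as f:
        content_hash = hashlib.sha256(f.read()).hexdigest()

    metadata = {
        "token_id": uuid.uuid4().hex,   # unique per copy, even for identical files
        "content_hash": content_hash,   # identical for every copy of the same file
        "issuer": issuer,
    }
    # Stand-in for a real signature made with the issuer's private key.
    payload = json.dumps(metadata, sort_keys=True).encode()
    metadata["signature"] = hmac.new(issuer_secret, payload, hashlib.sha256).hexdigest()
    return metadata


# The same file can be "uploaded" several times, each copy getting its own token id:
# print(mint_nft("artwork.png", "artist-wallet", b"issuer-private-key"))
```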
In conclusion – the way NFTs work is quite promising. With some goodwill from governments around the world, it could easily automate and speed up different legal frameworks. Additionally, it would increase the visibility and clarity of how they work. At the same time, unfortunately, the way we use NFTs today, aka selling pictures of cats and game models, is a bit speculative. They also inherit some of the disadvantages of traditional fungible tokens, especially the problem with the emission of new assets into the network.
Unfortunately, during the last two years we have seen quite a rise in the number of cybercrimes worldwide. Many attacks allegedly came from nation-state actors, and we observed a lot of blame in the public media space supporting this statement. Life is indeed a challenge, and the strongest almost always win. Still, there is a subtle difference between aggressively attacking foreign countries and defending your own interests and infrastructure.
As a matter of fact, we could categorize the last couple of years as a series of standalone cyber battles, which could eventually escalate into a fully-fledged cyberwar. In such situations, some people start fantasizing about hiring hacker-privateers and starting a Cyber World War, where teams of the best hackers fight each other. It sounds like an incredible plot for a sci-fi novel, but there are reasons why such actions could lead to disaster in reality:
In conclusion, cybersecurity and hacking are not similar to conventional armies. Sure, we can use the same terminology and even run “war” games. But essentially, the whole sector is much closer to the standard private security companies, which defend infrastructure perimeters and fight crime. The role of pentesting companies is to test these defenses by acting like criminals. Everything beyond that should be categorized as cyber warfare and be forbidden.
The last article discussed the advantages and disadvantages of the Open Source software model. We even listed some uncomfortable truths regarding its economic viability and how it could be more expensive than many proprietary products. Despite being an Open Source zealot, I want to start with the statement that I still think proprietary software is sometimes better than Open Source. My case cannot be compared with the average customer’s, because I have spent the last 18 years working in IT – aka I want much more control over my system than the standard PC user. At the same time, whenever I can, I strongly avoid using proprietary software because I want to know what runs on my device, or at least have the ability to review it if I wish. But let me list the good, the bad, and the ugly of using proprietary software:
In conclusion, there is no significant difference between the proprietary and Open Source software models. The only meaningful difference is that customers can more easily make legal claims against smaller proprietary vendors. However, once a vendor becomes too big, they hire better lawyers, and experienced lawyers are pretty good at defending corporate interests. Other than that, the tradeoff for the end customer is first-level support versus free usage.
I want to start this post with the statement that I am a fierce supporter of Open Source, and all of my computers, servers, and smartphones run different flavors of Linux. In the last ten years, I have used Windows ten times at most, and only because some software vendors have neglected the Linux ecosystem for years. Other than that, I have no wish or need to touch Mac or Windows for anything other than testing web or mobile apps.
At the same time, I want to strongly emphasize that Open Source as a model has its problems and that I believe no software development practice, Open Source or proprietary, is ideal. This post aims to list some of the advantages and disadvantages of the Open Source model. Despite its widely successful run during the last 30 or more years, the model is somewhat economically broken. But let’s start with the lists:
In conclusion, Open Source is not for everyone. It could be more secure or come with better support, but only if the code comes from a reputable software vendor. In all other cases, users are left on their own to handle their security and support. Another question is whether the alternative (using only proprietary software) is any better, but I will analyze this in another article.
With over 18 years of experience in the Information Technology and Computer Science field, I have often wondered how information affects us as human beings and our brains. Over the years, I invested a reasonable amount of time reading about different brain models and how they interact with information. Unfortunately, no model measures the level of stress and distress that information could put on our bodies. This article aims, first, to give a simple explanation of what the Internet is and how it could be connected to human brains, and second, to provide a sample formula for how information coming from different sources could affect our health.
This work uses some terminology from the works of the following authors – Norbert Wiener, Freeman Dyson, David Bohm, F. David Peat, Peter Senge, John Polkinghorne, Edmund Bourne, Marcello Vitali Rosati, and Fredmund Malik. The ideas from these works helped me prepare this article and clarify my understanding of how the brain and information are supposed to work. I would suggest every specialist in Artificial Intelligence and Machine Learning read their works to better understand our reality and how it is supposed to function.
But let’s start with a couple of physics-based definitions:
In other words, we could describe the continuum, including all human brains and the Internet, as a dynamical system. At the same time, we could define the human brain and any computer-based device as dynamical systems themselves (a neural network is, in fact, a type of dynamical system). And we could use entropy to “send and receive” information/quantum energy between the continuum and a human brain or a computer-based device.
Two additional definitions will help us to finish drawing the picture:
After we have laid out all the needed definitions, let’s draw the whole picture using them as building blocks. We have the continuum and a number of dynamical systems attached to it. Every system can receive energy/information from the continuum and put energy/information into it. Some systems are stable and only put energy/information into the continuum when other systems put some amount into them. Other systems are constantly in motion and emit and receive energy/information without breaks.
Many researchers categorize our brains and computers as reality engines – aka interpreters of the quantum energy passing through them. However, these reality engines must also be treated as emitters, because human brains and computers emit energy via video, audio, motion, temperature, etc. Systems without their own energy – newspapers, books, articles, computer hard drives, flash drives, etc. – could be called reality reflectors, because they need a boost of energy to emit anything. In short, that is how we could connect the continuum to the virtual world “virtualized” by the Internet. But let’s try to define some types of reality engines:
The diagram shows a sample dynamical system representing the energy transfer happening in the continuum. Most transfers happen thanks to sensor activity. Whether there are other means of energy transfer beyond those using sensors, we do not know yet.
There is an interesting aspect of how information/energy travels through the continuum. To get from one reality engine to another, it naturally takes stable paths. We could call these paths reality bridges and define them the following way:
After finishing the architectural presentation of our collection of dynamical systems connected to the continuum, let’s formulate two definitions used in the Information Technology field which we can transfer to our collection of systems:
Having all of these definitions and rules, let’s analyze how, in this setup, information/energy could affect human health. We have already established that the human brain may work as a reality engine, and that it looks like a dynamical system. At the same time, we can put information/energy into this dynamical system and remove it. And every dynamical system has a finite capacity of states. Two questions arise – what happens if we keep pouring information/energy into the system without removing any from it, and can we expect some filter that decreases the intensity of the incoming information/energy?
On the first question: in the case of a computer configuration, the computer will malfunction. In the case of the human brain, the short answer is – we don’t know. Based on the different theories I have read, I could assume that pouring too much information into our brains could lead to psychological problems and psychiatric diseases. Another interesting fact about our brains is that most psychological problems and psychiatric disorders cannot be tied to physical brain damage. The condition exists entirely on the level of reality perception, or we could assume it is a problem of dynamical system capacity overload.
On the second question: in the case of a computer system, filters are already in place; however, they work at too low a level. At the so-called application level, things become more complex, and the computer needs human help. Regarding the brain, the situation is much more complicated. Based on our life experience, we could expect the intensity of the received energy to depend on how emotionally close to us the emitting reality engine is. If it is our child and we receive negative news about them, we could expect the information to arrive with the highest intensity; however, if we receive negative information/energy about an unknown kid on the Internet or on TV, we could expect it to hit us with less power. Based on this observation, we could assume that there is an information/energy filter in our brains. It seems this filter is based on the social distance (which is partially based on latency) to the reality engine emitting the information/energy.
And finally, let’s combine all the statements above into a single formula:
Bandwidth is the raw bandwidth of a one-hour chunk of video and audio data, measured in bytes.
Stress is the level of stress we can attach to the information/energy transfer. Check the different levels in the table, which comes from Edmund Bourne’s excellent work on psychological problems and psychiatric disorders. I modified the table slightly to cover more common daily events.
Social Distance is the social distance modifier, which can be found in its own table. The modifier tells us that if we experience the information/energy transfer from a first-person view, it will hit us the strongest, and if we hear that someone we don’t even know has a problem, it will hit us ten times less.
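To make these three terms more tangible, here is a small Python sketch of one plausible way they could combine – a Bourne-style stress score scaled by channel richness and by social distance. The exact combination, the channel bandwidth values, and the social distance values are my own illustrative assumptions, not the formula itself.

```python
# Hypothetical reading of the formula's terms; all numeric values are illustrative.

FULL_BANDWIDTH = 1.5e9  # assumed bytes in one hour of first-person video+audio

# Channel richness relative to a full first-person experience (assumed values).
CHANNEL_BANDWIDTH = {
    "in_person": 1.5e9,
    "video_call": 8.0e8,
    "phone_call": 3.0e7,
    "text_article": 1.0e6,
}

# Social distance modifier: first-person hits hardest, strangers roughly 10x less.
SOCIAL_DISTANCE = {
    "self": 1.0,
    "child_or_partner": 0.9,
    "close_friend": 0.6,
    "acquaintance": 0.3,
    "stranger": 0.1,
}


def perceived_stress(stress_units: float, channel: str, relation: str) -> float:
    """Scale a Bourne-style stress score by channel richness and social distance."""
    bandwidth_factor = CHANNEL_BANDWIDTH[channel] / FULL_BANDWIDTH
    return stress_units * bandwidth_factor * SOCIAL_DISTANCE[relation]


# Bad news about a close friend heard on the phone vs. read about a stranger online.
print(perceived_stress(60, "phone_call", "close_friend"))
print(perceived_stress(60, "text_article", "stranger"))
```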
According to Bourne’s book, the typical yearly amount of stress for a human being is around 150 units. Some people can endure higher levels, others less, but the median is around 150. Using that data, we could calculate the amount of information/energy an average human being can survive per year. After passing this limit, we could expect the person to start feeling the effects of distress. Another interesting assumption is that the level of stress is reduced automatically over time. It seems our brains are designed to lose energy/information over a given period and thus reduce the stress level to some baseline.
If we play with the formula, we can make the following observations:
Surprisingly, the formula looks correct for most real-life social events. There are some edge cases, such as what happens if you read about your kid’s death in the newspaper. Will this information/energy hit you with less intensity than the video/audio equivalent? Certainly not. We could probably add a modifier based on the stress level per reality bridge type. However, the work needed to make the formula handle every edge case is far outside this article’s scope.
In conclusion, I do not pretend this work is entirely scientifically correct. There are many scientific holes which we cannot adequately prove. Additionally, this article is my interpretation of the listed authors’ works. I am neither a physicist nor a psychologist, and some of the nuances of the mathematical models used in these works may be too complicated for me to understand entirely.
However, the following questions certainly need answers:
I am convinced that someday we shall receive answers to these questions, but with our current knowledge, the answer is – we don’t know.
In the last two parts of this series, we discussed our network protocol and the architecture of our body camera system. In this final part, we shall discuss our backend recording and streaming service. After that, we shall present the budget we burned on this MVP, and finally, we shall discuss why it did not work.
There are multiple server-side video streaming solutions on the market. Unfortunately, most of them require installing new desktop software or plugins. At the same time, we saw that no video storage format is codec-agnostic and able to hold frames encoded with different codecs. All these weaknesses forced us to develop our own storage format for our videos. After a reasonable amount of time thinking about what format we needed for this kind of work, we formulated the following requirements:
Fortunately, if we analyze our network protocol, we can see the following characteristics, which fulfill the requirements:
With the information from the previous bullets, we can define the following logic. We append every incoming packet to its corresponding file on the filesystem, similar to what pcap does with network packets. At the same time, another process reads the file and builds an index in the memory of our service. The service then uses this index to restream the recorded network packets through WebSockets to the web browser player.
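As a rough illustration of that flow (not our actual implementation), here is a minimal Python sketch assuming a simple length-prefixed packet layout and a websockets-style API. The header layout, function names, and pacing are hypothetical.

```python
import asyncio
import struct

# Assumed per-packet header: payload length (unsigned int) + capture timestamp (double).
HEADER = struct.Struct("!Id")


def append_packet(path: str, timestamp: float, payload: bytes) -> None:
    """Recorder: append each incoming network packet to the session file as-is."""
    with open(path, "ab") as f:
        f.write(HEADER.pack(len(payload), timestamp))
        f.write(payload)


def build_index(path: str) -> list:
    """Indexer: scan the file once and keep (timestamp, offset, length) per packet."""
    index, offset = [], 0
    with open(path, "rb") as f:
        while header := f.read(HEADER.size):
            length, timestamp = HEADER.unpack(header)
            index.append((timestamp, offset + HEADER.size, length))
            f.seek(length, 1)  # skip over the payload
            offset += HEADER.size + length
    return index


async def restream(websocket, path: str) -> None:
    """Streamer: replay the recorded packets to the browser player over a websocket."""
    index = build_index(path)
    with open(path, "rb") as f:
        for _, offset, length in index:
            f.seek(offset)
            await websocket.send(f.read(length))  # assumes a websockets-like send()
            await asyncio.sleep(0)                # real code would pace by timestamp
```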
To implement the described logic, we decided to build the following system modules:
One can easily modify the proposed architecture to support cloud deployment and high scalability by replacing the concurrent queues with message brokers and the local filesystem with GlusterFS.
Now that we have covered the technical details of the implementation, let’s discuss how much it cost us to implement the MVP:
So we managed to implement the technical MVP for a total cost of $36,330. We tried to sell it, and we failed brutally.
Why we failed
As a team without experience in developing hardware products, we made many errors. Fortunately, it was our first and last attempt at selling a hardware product. We learned our lessons, which I shall list:
In conclusion, despite the problems, we managed to produce an MVP, which is fantastic news. Unfortunately, we could never sell this MVP to anyone, for the reasons listed above. Looking at the bright side, we learned which mistakes to avoid when penetrating a market, and that helped us with our following products.