<?xml version="1.0" encoding="utf-8"?>
<feed xmlns="http://www.w3.org/2005/Atom">
  <title>Dave's Blog</title>
  <id>http://worldaccordingtodave.com/</id>
  <subtitle>A short description of the blog.</subtitle>
  <generator uri="https://github.com/madskristensen/Miniblog.Core" version="1.0">Miniblog.Core</generator>
  <updated>2021-05-25T03:59:45Z</updated>
  <entry>
    <id>http://worldaccordingtodave.com/blog/on-line-education-good-or-bad/</id>
    <title>On-Line Education: Good or Bad?</title>
    <updated>2021-05-25T03:59:45Z</updated>
    <published>2021-05-25T03:59:45Z</published>
    <link href="http://worldaccordingtodave.com/blog/on-line-education-good-or-bad/" />
    <author>
      <name>David Demland</name>
      <email>test@example.com</email>
    </author>
    <category term="Professional" />
    <category term="info" />
    <category term="Education" />
    <content type="html">&lt;p&gt;When I was in college we had to attend on campus and I remember those days very well. Not only did I learn a lot from classes, but being on campus is an experience that really influence my life. I did not live on campus but many of my friends did. These are experiences that were well worth the time and trouble to have. I remember one of my professors making the statement that on our first day on the job, we would find that we really did not know our main subject area like we thought we did. As it turned out that professor was sort of a prophet. Not only did I not know as much as I thought I did, but it took no time at all to be asked questions I had no idea how to answer. Was it possible that I had wasted my time and money on a degree? It was at this time I realized that I did not understand what undergraduate really was. The lesson I really needed to have learned was that, although college introduced me to a lot of information, the real lesson was learning to think for myself. Without a single specific class, I had learned to solve problems by thinking for myself.&lt;/p&gt;  &lt;p&gt;About fifteen years later I started teaching at DeVry part time and shortly after that I started my graduate degree. The degree that seemed to match the skills, I saw I needed to developed, was a college that was starting to lead in the area of on-line education. I was not sure what on-line education would be like, from an educational view, but I knew I wanted an on-campus experience because of the previous on campus experiences I had before. So, I did the on-site classes rather than the on-line classes. At this same time compress class programs were starting to become the standard for post-secondary private education. I was seeing this even at DeVry. It was clear that students wanted shorter class times, I agreed with that from a student perspective. 
It was not until I started teaching in a compressed class program that I saw the issues from a hiring manager’s view.&lt;/p&gt;  &lt;p&gt;From the start I was the exception in my class. I read every chapter and book for a class, and it was clear that this was not the standard for my fellow students. I was also finding that the issues with my own students were no different; in general, students did not read much, if at all. The real influence on learning came from the classroom discussions. At this point, as both a student and a professor, I was finding that the more involvement there was in the classroom, the more overall learning success was achieved. The same observation held for the different candidates being considered for hire. Not only was I seeing a different competency level between on-line students and on-site students, but many of my friends were asking about this difference as well.&lt;/p&gt;  &lt;p&gt;During the start of on-line education at DeVry, on-line became synonymous with discussions. This was reinforced when I was given a book called &lt;i&gt;Discussion as a Way of Teaching&lt;/i&gt; by Stephen Brookfield and Stephen Preskill. I really did not like some parts of the book, but there were other parts that would help change my concept of classroom management. Later, Wired Magazine published an article called &lt;i&gt;A Radical Way of Unleashing a Generation of Geniuses&lt;/i&gt; in its October 2013 issue. These two readings challenged the way I approached my classes. What these two resources did was help me see what an effective educator is: someone who facilitates learning by asking questions, not necessarily giving answers. It was at this point I gave up PowerPoints for questions on the board. I turned to asking “how” instead of telling “why”. Do not get me wrong, I still had to present answers, but answers were now part of asking questions. The goal became letting the students see there were many ways to solve a problem.
Getting students to talk helped open them to other ideas.&lt;/p&gt;  &lt;p&gt;In a classroom situation I found that I could use both open-ended and closed-ended questions and still get students to engage in class. As the &lt;a name="_Hlk72779732"&gt;modality &lt;/a&gt;changed to what was called on-live, where I had both on-site and on-line students, I found that discussion questions had to be open-ended only. In an on-line environment, I found closed-ended questions would shut down discussions. Once an answer was given, everyone else would repeat that same answer in different ways. There was no constructive learning in that outcome. Brookfield and Preskill stated in their book that “discussions work best when a large number of students participate”. This was borne out as I moved on-line discussions to open-ended questions. Students were more engaged, and I was able to keep the discussion moving. Sometimes I would even redirect the discussion to a slightly different topic to help illustrate a point I wanted the students to look at more closely.&lt;/p&gt;  &lt;p&gt;I found that I had to get on-line students to see the on-line discussions as a classroom. Just as a traditional classroom allows for a more in-depth look at issues, on-line discussions can allow for this in-depth view by having all students talk about their views. I found that the more the students joined in a conversation, rather than just meeting a posting requirement, the more they would take the conversation to in-depth levels. This in turn would lead to a better educational outcome. Other resources support this outcome; for example, “the value of interaction between students in an e-classroom setup can’t be underestimated” (&lt;a href="https://kpcrossacademy.org/making-good-use-of-online-discussion-boards/"&gt;https://kpcrossacademy.org/making-good-use-of-online-discussion-boards/&lt;/a&gt;).
&lt;/p&gt;  &lt;p&gt;As I have talked to friends, and thought about my own experiences, I have come to believe that in many hiring managers’ minds there is a concern over the quality of on-line education compared to traditional on-site education. These concerns, that on-line education yields lower-qualified candidates, are often borne out in the candidate pools I have seen as well. I also think that these outcomes are a product of not remembering the “Why” we started teaching, as Simon Sinek would put it. Today I still view teaching as being given the honor of helping change lives.&lt;/p&gt;  &lt;p&gt;Over the last year or so, all educational institutions have had to learn how to make the most out of remote learning, or on-line education. This is not a modality I have ever wanted to teach in, but we have all had to find a way to make it work. What my experiences have shown me during this time is that I must somehow find a way to bring classroom engagement to a remote learning situation. The way I have done that is by turning on-line discussions into an on-line classroom. This move has forced me to be more engaged with my class. I have to make sure I get involved with the discussions every day, and that the discussions follow more of an open-ended question concept. This is hard because many of the discussion questions are not designed to be open-ended, so I have to make the discussions more open-ended by redirecting them. This has not been easy for some students to get used to, but I have received good feedback from students about the level of learning, and that tells me this may be getting the outcome I expect.&lt;/p&gt;</content>
  </entry>
  <entry>
    <id>http://worldaccordingtodave.com/blog/who-owns-your-data/</id>
    <title>Who Owns Your Data?</title>
    <updated>2021-04-02T05:24:12Z</updated>
    <published>2021-04-02T05:24:12Z</published>
    <link href="http://worldaccordingtodave.com/blog/who-owns-your-data/" />
    <author>
      <name>David Demland</name>
      <email>test@example.com</email>
    </author>
    <category term="Security" />
    <category term="encryption" />
    <category term="data" />
    <content type="html">&lt;p&gt;What a simple question that has such an obvious answer; or does it? After all we write contracts to ensure the data, we sent to third parties, retain our ownership do we not? Is it not true that data can make a competitive edge over the competition? If data is not that important why are we always concern about the loss of this data? At this point it should be clear that we all care about our data and it can be a make-or-break situation if that data is loss or stolen. So, it seems that the answer to the question “who owns our data” can only be us. No one else will be as concern about our data, and its security, more than we will. Do we show the world that we, in fact, really care about our data?&lt;/p&gt;  &lt;p&gt;I know of a case where a business was so concern about the security of their data they required the data to be encrypted before sending it to a third party provider they were contracting with for some services. Of course, there was an NDA in place, but the business still wanted that data encrypted for transport. I am sure that this seems reasonable to just about everyone who reads this. Just from a security perspective it can be seen that confidentiality is very important. In fact, we all know that confidentiality is one of the three basic pillars of security. The other two are integrity and availability. To do the encryption and decryption of the data, GNU Privacy Guard, GPG, was used. GPG uses an asymmetrical encryption process that entails generating a public/private key combination. All of these makes sense and seems to follow standard practices.&lt;/p&gt;  &lt;p&gt;The concern I had was when I found out that the business wanted the third-party vendor to generate the public/private keys and then the business would use the public key to encrypt the data and send to the third-party vendor. This is where I became concern. Here is the risk I see with this situation. 
If you look at it from the vendor’s view, there is no guarantee the data came from the business, because anyone can have the public key. If instead the business generated the keys and gave the public key to the vendor, the vendor could be assured the data came from the business, because the private key acts like a signature from the business. If the business encrypts the data with its private key, then the public key can decrypt it, and a successful decryption proves the data came from the holder of the private key. This would also create a situation where nonrepudiation exists. The vendor would know for sure the data came only from the business, and the business would be the only party able to produce data the vendor could verify this way. &lt;i&gt;Principles of Information Security&lt;/i&gt; by Whitman and Mattord states it well: “nonrepudiation underpins the authentication mechanism collectively known as digital signature”. Using a signature to receive an important package is common in the real world; why should it not be common in the digital world?&lt;/p&gt;  &lt;p&gt;Let’s think of this in a different way. What if the data the vendor had ended up creating an issue with the business’s reputation? How does the business prove that the issue came from the vendor and not from the data? After all, has the business not closed the door that would have guaranteed knowledge of where the data came from? Today there are many businesses being compromised by hacktivists just to make a point. Can we say for sure that hacktivists would not try to ruin a business’s reputation if they could? The recent issues with GameStop and the stock market show what a group of people can do when they band together for a common cause. What is keeping this from happening for other, vindictive reasons? Today it may seem unreasonable to think that a data exchange could be compromised for this reason, but it was also only a few months ago that it seemed no one would ever hack a supply chain, yet it happened.
Are we willing to risk a business on the assumption that someone would never do that?&lt;/p&gt;  &lt;p&gt;Just to be clear: if we really care about our data and its security, why would we want someone else to oversee its encryption?&lt;/p&gt;</content>
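The key-ownership argument in this post can be sketched with a toy RSA key pair. This is purely illustrative: the numbers are deliberately tiny and insecure, and the function names are hypothetical; a real exchange would use GPG as the post describes.

```python
# Toy RSA (insecure, illustration only): shows why data transformed with
# the PRIVATE key acts as a signature that anyone holding the PUBLIC key
# can verify, giving the vendor nonrepudiation of the sender.

p, q = 61, 53              # tiny primes, never use in practice
n = p * q                  # public modulus (3233)
phi = (p - 1) * (q - 1)    # 3120
e = 17                     # public exponent, shared with the vendor
d = pow(e, -1, phi)        # private exponent, kept only by the business

def sign(digest):
    # The business "encrypts" a message digest with its private key.
    return pow(digest, d, n)

def verify(signature):
    # The vendor applies the public key; recovering the digest proves the
    # data came from the holder of the private key.
    return pow(signature, e, n)

digest = 42                      # stands in for a hash of the data file
assert verify(sign(digest)) == digest
```

If the vendor generated the key pair instead, anyone holding the circulated public key could have produced the ciphertext, and this check would prove nothing about who sent the data.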
  </entry>
  <entry>
    <id>http://worldaccordingtodave.com/blog/are-you-an-expert/</id>
    <title>Are You An Expert?</title>
    <updated>2021-02-24T01:24:22Z</updated>
    <published>2021-02-24T01:24:21Z</published>
    <link href="http://worldaccordingtodave.com/blog/are-you-an-expert/" />
    <author>
      <name>David Demland</name>
      <email>test@example.com</email>
    </author>
    <category term="business" />
    <category term="Professional" />
    <content type="html">&lt;p&gt;Many times, when talking about qualified people, I ask the same question: “What do you call the person who graduates last in their medical class?” ….. Doctor. Is this the doctor you want? Another example I like to talk about is from the movie &lt;i&gt;Armageddon&lt;/i&gt;. In the scene where the NASA scientist is talking to the White House adviser there appears one of the best comments made about qualifications of experts. That comment was:&lt;/p&gt;  &lt;blockquote&gt;   &lt;p&gt;I know the presidents' chief advisor, we were at MIT together. And, at this point in time, you really don't want to take advice from a man who got a C minus in astrophysics.&lt;/p&gt; &lt;/blockquote&gt;  &lt;p&gt;Just to make it clear I am not saying you have to have high grades, or a high education, to be an expert. I know many people with what would appear as lower education outperform someone of higher education. The major characteristic that makes a person an expert is their credibility. For someone to be credible it does not require high grades, high education, or even certifications. Credibility is earned when someone shows a high level of knowledge on a particular subject. Grades, education, and certifications can enhance credibility, but they cannot define credibility.&lt;/p&gt;  &lt;p&gt;People I see as experts, all tend to have a common characteristic. That characteristic is the drive to continue to learn a subject no matter how long they have been working in that subject area. These are the people who love a particular subject and want to know all they can about that subject. These are the type of people that add real value to a business.&lt;/p&gt;  &lt;p&gt;This definition is well supported by many authors, but one of my favorite articles is &lt;i&gt;The Making of an Expert&lt;/i&gt;: &lt;a href="https://hbr.org/2007/07/the-making-of-an-expert"&gt;https://hbr.org/2007/07/the-making-of-an-expert&lt;/a&gt;. 
This article states that there is no correlation between IQ and expert performance, which I have seen in life. My original background is in music, so I have always understood what practice does to your skill set. I have always told my students that if you want to do better, practice. I have built many networks, written many different types of programs, and tried different types of attacks; all at home on systems, or VMs, I have set up to see how things work. I have lived by the thought that knowledge comes from learning from your mistakes, but wisdom comes from learning from others’ mistakes. So, I also read a lot to see what others have learned.&lt;/p&gt;  &lt;p&gt;Going back to &lt;i&gt;The Making of an Expert&lt;/i&gt;, there is another very true statement:&lt;/p&gt;  &lt;blockquote&gt;   &lt;p&gt;The journey to truly superior performance is neither for the faint of heart nor for the impatient. The development of genuine expertise requires struggle, sacrifice, and honest, often painful self-assessment.&lt;/p&gt; &lt;/blockquote&gt;  &lt;p&gt;This is why, after all these years, I still do testing and experiments to see how new technologies work and how I might be able to leverage them on projects. I will admit I sometimes do not like looking at things I do not do well, because it is hard work, and who does not like feeling good about something rather than feeling like you do not know something? In the long run I become better equipped, with new knowledge, to handle projects and the problems that may arise in them. A good example of this was when I first looked at Fortinet. After many years of Cisco knowledge and training, I saw the world as Cisco did in design and implementation. Some things that Fortinet did were very different from Cisco. However, I have found that Fortinet has helped me to understand the world in a different light.
It also showed me that although Cisco followed standards, their implementation interprets the standards a particular way, and other implementations may not interpret them the same way. This has opened my mind to how standards can be implemented so differently and still match the standard. There are parts of networking I now understand on completely different levels than I used to.&lt;/p&gt;  &lt;p&gt;The question is how you know if someone is an expert. Going back to &lt;i&gt;The Making of an Expert&lt;/i&gt;, there are three tests presented that help determine if someone might be considered an expert. The first test: does that person perform consistently at a level superior to that of their expert peers? The second test: is this person able to produce concrete results? The third, and final, test: can that person’s results be replicated and measured? I really like these tests because they set goals to work towards each and every day. I have never heard anything so powerful as Denzel Washington’s speech where he talked about goals: &lt;a href="https://www.youtube.com/watch?v=uM-MqsWjQd4"&gt;https://www.youtube.com/watch?v=uM-MqsWjQd4&lt;/a&gt;. At about the 2:25 mark in this video, Denzel starts talking about having goals. He stated there are many types of goals, all the way down to having a goal each day. I have set goals for every day of my professional life, and it was great to hear someone like Denzel state what should be obvious but is not so obvious to so many.&lt;/p&gt;  &lt;p&gt;I have found that in my daily life, knowing what I know is not important, because I already know it. What is important is knowing what I do not know, so I know when to get help. I think this is the most important action I can take on any problem I am trying to address. I am not as concerned about titles as I am about action. I really do not care if someone calls me an expert or not. What I care about is whether I got the job done.
In the long run I will bet that when all is said and done, the only thing that will be important is whether the project was completed in a timely manner and to the proper standards.&lt;/p&gt;  &lt;p&gt;I do know that I cannot do the best possible job without the best available information. This is why I am so driven to read and study all aspects of what I have to do each day. All this studying has deepened and widened my knowledge base to the point that I have the confidence to take on any task I am asked to do. Does this make me an expert? I do not know; I will leave that for others to determine. I just know that at the end of each day I feel I have done the best I can, and that is a great reward.&lt;/p&gt;</content>
  </entry>
  <entry>
    <id>http://worldaccordingtodave.com/blog/secure-communications-and-compliance/</id>
    <title>Secure Communications and Compliance</title>
    <updated>2021-01-31T04:59:30Z</updated>
    <published>2021-01-31T04:59:30Z</published>
    <link href="http://worldaccordingtodave.com/blog/secure-communications-and-compliance/" />
    <author>
      <name>David Demland</name>
      <email>test@example.com</email>
    </author>
    <category term="security" />
    <category term="business" />
    <content type="html">&lt;p&gt;Today it is common for the need to communicate as an individual to a business, a business to an individual, or an individual to an individual. In many cases this communication is done with E-Mail. The problem with E-Mail is that it is the same as sending a postcard in the mail to someone. Anyone who sees the postcard can read the message. In the same way anyone who can see the E-Mail can read the message on it because it is send in clear text. This makes it hard to send an E-Mail for only the recipient to see. However, it is not hard to secure this type of communication.&lt;/p&gt;  &lt;p&gt;Before starting any communication, you need to stop and think about what kind of information is being sent. In security terms we call this process data classification. Most of the information that is transferred from one person to another person can, most of the time, fall into one of two classifications; public and private. Private data can be referred to by different names, for example personally identifiable information (PII) or protected health information (PHI) are common terms used in some security standards. The definition for private information is well defined by security standards and tend to be rather obvious. The problem is harder to see when information is used from various sources and when all this information, from different sources, is put together, it gives away more information than it should. This is called data leakage. 
The real question is: What constitutes private information?&lt;/p&gt;  &lt;p&gt;Most security literature agrees that there are only three pieces of information needed to take someone’s identity (&lt;a href="https://www.consumer.gov/articles/1015-avoiding-identity-theft"&gt;https://www.consumer.gov/articles/1015-avoiding-identity-theft&lt;/a&gt; and &lt;a href="https://www.makeuseof.com/tag/the-10-pieces-of-information-identity-thieves-are-looking-for/"&gt;https://www.makeuseof.com/tag/the-10-pieces-of-information-identity-thieves-are-looking-for/&lt;/a&gt;). These are your name, your birthday, and your social security number. Two of the three are very easy to find on the Internet. A social security number is the one that should be hard to find; however, there is a strong indicator that this too is somewhere on the dark web, at least. There are also other pieces of information that can be used that may not be as obvious.&lt;/p&gt;  &lt;p&gt;Open-Source Intelligence (OSINT) opens up other information that can be used as well. Birthplace is important because many security question lists include it. Special dates, such as birthdays, weddings, and others, can be discovered by outsiders. I would include names and nicknames of special people in your life. All this information can be used for security questions or a PIN. I would encourage everyone to enter their name into a search engine every so often, just to see what information about you is exposed. This will give you an idea of what can be found through OSINT. I do not mean for this to sound like we should just “throw in the towel” on our information and not worry about it. It just means we need to monitor all of our information.&lt;/p&gt;  &lt;p&gt;The following examples are of information that has to be transported from a business to an individual and from a business to a business, and I will show how to identify the type of data and how it should be transported.
The goal here is to open up how to look at data and how to assess it from a security viewpoint.&lt;/p&gt;  &lt;p&gt;Example 1:&lt;/p&gt;  &lt;p&gt;A small healthcare provider needs to send medical information to an individual. On &lt;a href="https://www.hipaajournal.com/what-is-protected-health-information/"&gt;https://www.hipaajournal.com/what-is-protected-health-information/&lt;/a&gt; it is stated:&lt;/p&gt;  &lt;blockquote&gt;   &lt;p&gt;Protected health information includes all individually identifiable health information, including demographic data, medical histories, test results, insurance information, and other information used to identify a patient or provide healthcare services or healthcare coverage.&lt;/p&gt; &lt;/blockquote&gt;  &lt;p&gt;If the information falls into any of these categories, then the data needs to be sent encrypted. There are a couple of ways to achieve this. One way is to have a Web Portal that patients can log into to get the information. For a small business, a Web Portal may not be something the business can afford. The question is how this business meets the HIPAA requirement with the limited resources it has. The answer is to look at what tools are available to help. One tool I would suggest is 7-Zip, &lt;a href="https://www.7-zip.org/"&gt;https://www.7-zip.org/&lt;/a&gt;. This is a free zip utility that can zip files with a password; the files can then be opened with any zip utility once the password is entered. Using this utility, the PHI can be secured with a password and the file is encrypted. At this point the only issue is the password. Since the communication will be done using E-Mail, the password exchange should be done out-of-band. This means that the password should not be exchanged by E-Mail, since the information itself is being passed through E-Mail. This is not a problem, since a phone call can solve the issue very easily.
Once this process is completed, not only is the communication secure, it complies with HIPAA. This is a very cost-effective process for smaller healthcare providers.&lt;/p&gt;  &lt;p&gt;Example 2:&lt;/p&gt;  &lt;p&gt;In this example a business needs to transport information to another business that is a service provider. For this example, we will assume that a retailer is outsourcing data for some data analytics work to be completed by another business. Since the retailer has a Point of Sale (POS), they process payments of many different types. One type of payment is a credit/debit card. Once card data is processed, the retailer is bound by the Payment Card Industry Data Security Standard (PCI-DSS). This standard is enforced by banks and the card brands. Failure to meet this standard will incur penalties from the bank or the card brand; either way, there is no way to avoid the penalty. The important data items with a card are the card number, the cardholder name, the expiry date, and the security code. In general, this is the data needed for fraudulent use of a card. For our example, the retailer is using the data analytics company to help identify misuse of card numbers. In this case the data analytics company only needs the card number, since they do not need anything else for their work. The retailer can export only the card number with the rest of the tender (payment) data. At this point the data analytics company does not hold any real private information, since a card number on its own is just a set of numbers. Without the other data, even a breach that reveals the card numbers leaves those numbers useless. If the data analytics company also stores loyalty card numbers, this makes the card numbers even more useless, since there is no way to tell the two types of card numbers apart. In this case there is no need for the data to be transferred encrypted. This will reduce the cost of data transfer and maintenance, since all the extra encryption can be skipped.
This is where some businesses do a bad job of looking at the data they send out and end up spending more money than necessary on maintenance. Please note that if customer data is included in what goes to the data analytics company, the card number may take on a different data classification and may need to be secured further. This is why understanding the data classification on both sides is so important. Both businesses should be looking at what is needed from both sides; this will allow both to do the best possible job.&lt;/p&gt;  &lt;p&gt;The problem with many businesses is that compliance is seen as a check-off box, when in reality compliance should be about better business operations. When the goal is to hit the compliance check box, a business is only waiting for disaster to happen. Remember, a bad actor only needs to be successful once to create a real problem, while the security process must be successful every time to block the bad actor. The odds are not in favor of the business.&lt;/p&gt;</content>
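The classification step in the post's two examples can be sketched in a few lines. The field names here are hypothetical, chosen only to mirror the examples; a real policy would enumerate the PHI/PII fields its own standards define.

```python
# Minimal sketch of the data-classification step: decide whether an
# export needs encryption before it leaves the business.

# Fields treated as private (PHI/PII) under the classifications above.
PRIVATE_FIELDS = {
    "name", "birthday", "ssn",                           # identity-theft trio
    "medical_history", "test_results",                   # PHI examples
    "cardholder_name", "expiry_date", "security_code",   # PCI card data
}

def needs_encryption(export_fields):
    """Return True if any exported field is classified as private."""
    return any(field in PRIVATE_FIELDS for field in export_fields)

# Example 1: PHI to a patient -- encrypt (e.g., password-protected 7-Zip).
assert needs_encryption({"name", "medical_history"})

# Example 2: bare card numbers for analytics, nothing identifiable attached.
assert not needs_encryption({"card_number"})
```

Note that the second case flips as soon as customer data rides along with the card number, which is exactly the reclassification the post warns about.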
  </entry>
  <entry>
    <id>http://worldaccordingtodave.com/blog/reasons-for-not-patching-systems/</id>
    <title>Reasons for Not Patching Systems</title>
    <updated>2021-01-27T01:41:53Z</updated>
    <published>2021-01-27T01:41:53Z</published>
    <link href="http://worldaccordingtodave.com/blog/reasons-for-not-patching-systems/" />
    <author>
      <name>David Demland</name>
      <email>test@example.com</email>
    </author>
    <category term="security" />
    <category term="business" />
    <content type="html">&lt;p&gt;As I started another new class in security, one of the discussions we had was about patching systems. This is not a new subject and comes up in just about every security class I have ever taught. In fact, I have had this type of discussion at work. Every security standard and common practice states that systems needed to be patched. We only need to do a quick look at recent security breaches to see that many of them occur because systems are out-of-date or they are running obsolete software. For example, the WannaCry ransomware attack was effective when executed on Windows XP systems not newer Windows systems, &lt;a href="https://www.csoonline.com/article/3227906/what-is-wannacry-ransomware-how-does-it-infect-and-who-was-responsible.html"&gt;https://www.csoonline.com/article/3227906/what-is-wannacry-ransomware-how-does-it-infect-and-who-was-responsible.html&lt;/a&gt;. Microsoft end of life for Windows XP was in April 2014, &lt;a href="https://www.microsoft.com/en-us/microsoft-365/windows/end-of-windows-xp-support"&gt;https://www.microsoft.com/en-us/microsoft-365/windows/end-of-windows-xp-support&lt;/a&gt; and &lt;a href="https://www.bytesolutions.com/windows-xp-end-life-eol-retirement-dates/"&gt;https://www.bytesolutions.com/windows-xp-end-life-eol-retirement-dates/&lt;/a&gt;. Yet the WannaCry attack occurred in May 2017, &lt;a href="https://usa.kaspersky.com/resource-center/threats/ransomware-wannacry."&gt;https://usa.kaspersky.com/resource-center/threats/ransomware-wannacry.&lt;/a&gt; We have all heard of the Equifax breach in 2017, it too was executed against an unpatched system, &lt;a href="https://www.csoonline.com/article/2130877/the-biggest-data-breaches-of-the-21st-century.html"&gt;https://www.csoonline.com/article/2130877/the-biggest-data-breaches-of-the-21st-century.html&lt;/a&gt;. The list can go on and on. 
In fact, a CSO report showed that 60% of breaches involved unpatched systems, &lt;a href="https://www.csoonline.com/article/3153707/top-cybersecurity-facts-figures-and-statistics.html"&gt;https://www.csoonline.com/article/3153707/top-cybersecurity-facts-figures-and-statistics.html&lt;/a&gt;. &lt;/p&gt;  &lt;p&gt;Although it may be clear why to patch and update systems, it can be rather deceptive why systems are not patched. To make it clear: if a system is breached, it is no one’s fault except the business management. It is the business management that makes the hard decision on how much money can be spent on security, and this decision dictates what the implemented security system will look like. At the heart of this decision is risk. Risk is the major reason security systems exist. Without risk management, there is no reason to have a security system in place. The first time I read &lt;i&gt;Principles of Computer Security&lt;/i&gt; by Arthur Conklin and Greg White, I ran across a great formula for calculating the value of security on an asset, which helped make clear how risk can be assessed on a particular asset. Although Conklin and White called it the Operational Model of Computer Security, I have seen many similar versions in other books under other names. This model is defined as:&lt;/p&gt;  &lt;blockquote&gt;   &lt;p&gt;Protection = Prevention + (Detection + Response)&lt;/p&gt; &lt;/blockquote&gt;  &lt;p&gt;I have never understood the reason for the parentheses, but the basic equation is clear. This allows a cost of security to be assigned to an asset, which can then be turned into a business case. As I always tell my students, from this point there are three options a business can select from. First, if the protection cost is less than the value of the asset, it almost always makes sense to implement the security item. However, if the protection cost is higher than the value of the asset, then management can select one of two options.
If the asset requires some sort of protection for a business reason, the first option is to let the risk be managed by someone else. Another way to look at this is that the asset will be insured. Insurance is a great way to reduce the security cost; it allows a business to pay a third party to assume some of the risk for the asset. The second option is for management to accept the risk. In this case the business will simply absorb a loss if something happens to the asset, and it will be up to management to determine what to do with the asset.&lt;/p&gt;  &lt;p&gt;With all this background information, it is time to look at the reasons that patches are not applied. There are many reasons not to apply patches, but the following are the ones I have seen in the places I have worked. To start with, there is usually no patch management policy in place. Just about every security standard requires patching to be done, yet in general no explicit patch management policy is required. This leaves open many holes in how patch management can be done.&lt;/p&gt;  &lt;p&gt;Operating system updates can be applied automatically from the OS vendor. However, there is a risk that an OS update may break a mission-critical system. This problem could be addressed very easily with a lab environment that matches the production environment. Years ago this would require extra hardware, so many businesses would not spend the money for a lab. The issue then becomes deploying an update into production without any testing. This seems harmless, until one of these patches takes down a mission-critical system. Of course, the loss of a mission-critical system causes major pushback from management. When a patch fails and management pushes back hard, the message to the staff is: do not patch, since we cannot test the patch first. This no-patch stance then increases the risk of a security breach. 
Today it seems inexcusable not to create a production-like environment. The need for extra hardware no longer exists, since virtual systems can be used for testing. Even if a business does not have a large enough system for a virtual lab, there are many low-cost ways of creating a test environment in the cloud and tearing it down when testing is done.&lt;/p&gt;  &lt;p&gt;A common reason I have heard for not patching is: if the system is not broken, do not fix it. I cannot put into words what I think when I hear this. That being said, I also understand where it comes from. If a patch has failed in the past and management made a big pushback over it, then there is good reason not to risk getting management involved again. Remember, if an update fails just about everyone knows about the failure, but if the system works no one cares. A business I once worked for used a remote access server, instead of VPN connections, for all remote access to internal systems. At some point the remote access server was replaced with a new one, but the old one was forgotten. The old remote access server was not updated, and one day it was breached. This is another common mistake I have seen: the old server must be left in place during the transition to the new server, but at some point it is forgotten, and after the transition is complete it is never decommissioned or updated. This is why audits need to be done on a regular basis, but that requires not only documentation but also time, which costs money. This is an operational cost that management downplays to save money in the short term, but it can lead to larger problems and expense in the future if there is a breach. 
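The kind of recurring audit described here can be sketched as a small script; the host names, fields, and dates below are invented purely for illustration:

```python
# Hypothetical audit sketch: flag in-service hosts whose last patch
# date is older than a cutoff, so a forgotten server stands out.
from datetime import date, timedelta

inventory = [
    {"host": "ras-old", "last_patched": date(2019, 3, 1), "decommissioned": False},
    {"host": "ras-new", "last_patched": date(2021, 5, 1), "decommissioned": False},
]

def stale_hosts(entries, today, max_age_days=90):
    """Return hosts still in service that have gone unpatched past the cutoff."""
    cutoff = today - timedelta(days=max_age_days)
    return [e["host"] for e in entries
            if not e["decommissioned"] and cutoff > e["last_patched"]]

print(stale_hosts(inventory, today=date(2021, 6, 1)))  # → ['ras-old']
```

Running a check like this on a schedule turns the audit from an occasional compliance exercise into routine, cheap automation.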
The error here is that most businesses see audits only as a check box for compliance, and if compliance is not required the audit is not seen as worth the cost.&lt;/p&gt;  &lt;p&gt;Another reason a system may not be updated, along a similar line, is when the system is internally produced. It is not uncommon for a developed system to use third-party tools, and it is also not uncommon for constant pressure to push development forward rather than back to update things that are complete and working. The problem is the same for development as for system maintenance: some of these third-party tools need updating. With the pressure to meet demands for new features, updating tools can be seen as unnecessary work because the current version is just fine. However, there will be a time when every component reaches end of life, or needs a patch, and that includes third-party tools. At those times the development process takes a big hit to update the tools. The other issue is that sometimes the update of a tool is a breaking change. A breaking change is one where existing code has to be rewritten because the vendor has redesigned the original interface. These fixes can take more time, since the original intent of the code has to be understood before the breaking changes can be accommodated. I have seen tools out of date by as many as four major versions. In these cases updates are not simple, and most of the time they will not easily integrate back into the main source control branch while others are still making enhancements to it. This introduces many risks to the code base, and my experience has shown that more often than not there are few, if any, unit tests to identify side effects of a change, which complicates the update all the more.&lt;/p&gt;  &lt;p&gt;I have also seen developers change the code of a third-party tool so it integrates into their system more easily. 
This is a real problem because a breaking change, or any update, can wipe out the change the developer made. It is better to find a proper way to use a third-party tool than to change its code to make it work “the way we want it to work”. Of course, figuring out how to use a tool properly may slow down development, but it is always better to use a tool as it is designed to be used.&lt;/p&gt;  &lt;p&gt;It is often said that there is a lack of resources to handle patch management. On the surface this may seem true because, in general, IT always seems to be short of resources. If it is really true, the management team needs to find a way to correct the shortage; otherwise it will be only a matter of time before there is a breach. However, I have seen this lack of resources created by a poor process. If patches are handled manually, of course they will take more resources. In that case the process should be changed before new staff is added. I have seen many cases where there is no automated update process, which is very common in internally developed systems. If an update cannot be made with a single click, then more work needs to be done to get to a single-click deployment. This is at the heart of deployment in DevOps.&lt;/p&gt;  &lt;p&gt;I have seen many systems whose updates have to be followed by someone coming in afterwards to update configuration files. This is not good for any update process, and it leaves that component open to update issues. In general, an update process should be able to update configuration files as well. The main difficulty is distinguishing user changes to the configuration from the changes an update requires. This is a solvable problem, but in most cases development timelines are too short to allow for it. It is common for the update process to be completed as quickly as possible, with update problems addressed later. 
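One common way to tell a user change from an update change is a three-way merge against the previously shipped defaults; this is only a minimal sketch, and the keys and values are illustrative:

```python
# Three-way configuration merge sketch: keep a value the user
# deliberately changed, otherwise take the new default from the update.

def merge_config(old_defaults, new_defaults, user_config):
    merged = dict(new_defaults)
    for key, value in user_config.items():
        # A value that differs from the old default is a user change; keep it.
        if value != old_defaults.get(key):
            merged[key] = value
    return merged

old = {"timeout": 30, "log_level": "info"}
new = {"timeout": 60, "log_level": "info", "retries": 3}
user = {"timeout": 30, "log_level": "debug"}
print(merge_config(old, new, user))
# → {'timeout': 60, 'log_level': 'debug', 'retries': 3}
```

The update picks up the new timeout and the new retries key, while the user’s deliberate log_level override survives.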
This “deal with the issue later” attitude has become a bigger problem with the Software-as-a-Service, SaaS, design. SaaS gives a business more control over its deployed systems, so many businesses see updates as an internal support issue that can be handled later. Again, this attitude goes against DevOps.&lt;/p&gt;  &lt;p&gt;In short, commitment to risk management at the highest management levels will inherently lead to a patch management policy that is properly implemented, which in turn will truly reduce the risk of a security breach.&lt;/p&gt;  &lt;p&gt;Other References&lt;/p&gt;  &lt;p&gt;&lt;a href="https://deltarisk.com/blog/we-dont-need-no-stinking-patches-why-organizations-dont-patch/"&gt;https://deltarisk.com/blog/we-dont-need-no-stinking-patches-why-organizations-dont-patch/&lt;/a&gt;    &lt;br /&gt;&lt;a href="https://blog.automox.com/6-reasons-companies-dont-patch"&gt;https://blog.automox.com/6-reasons-companies-dont-patch&lt;/a&gt;    &lt;br /&gt;&lt;a href="https://www.csoonline.com/article/3025807/why-patching-is-still-a-problem-and-how-to-fix-it.html"&gt;https://www.csoonline.com/article/3025807/why-patching-is-still-a-problem-and-how-to-fix-it.html&lt;/a&gt;&lt;/p&gt;</content>
  </entry>
  <entry>
    <id>http://worldaccordingtodave.com/blog/common-practices-or-best-practices/</id>
    <title>Common Practices or Best Practices?</title>
    <updated>2020-12-22T06:11:45Z</updated>
    <published>2020-12-22T06:11:45Z</published>
    <link href="http://worldaccordingtodave.com/blog/common-practices-or-best-practices/" />
    <author>
      <name>David Demland</name>
      <email>test@example.com</email>
    </author>
    <category term="security" />
    <category term="business" />
    <content type="html">&lt;p&gt;With the events of the last few days over the SolarWinds breach, I just wanted to give my views about how this massive of a breach could have happened and go for so long unnoticed. I have spent my career in private business; although I did have a short contract, part time for six months, in a government setting once. I have seen all sides of how private businesses run. These experiences are what has led to these conclusions of how a breach this bad could have occurred.&lt;/p&gt;  &lt;p&gt;It is best to start with some background of the development lifecycle. There are many common practices in the development lifecycle and there are also some best practices. As odd as it may seem these two are not necessarily the same. It is the leadership of the business that will define when, or if, best practices will used. In general, best practices tend to come with a higher up-front cost, but lower long-term costs. This is not always an easy decision for leadership to make. It should be noted that the old way of development was to manage projects. The new way of development is to manage risk. This is what Barry Boehm talked about in his spiral model of software development which is what has led to today’s agile development methodologies (&lt;a href="http://www.cse.msu.edu/~cse435/Homework/HW3/boehm.pdf"&gt;http://www.cse.msu.edu/~cse435/Homework/HW3/boehm.pdf&lt;/a&gt;). The problem is that many businesses want to manage projects, like they always have, but want to appear to be using new methods. In these cases, businesses use terms as buzz words rather than an actually thought process. When risk defines the development process what if questions become tools of improvement, not a stumbling stone to the timeline.&lt;/p&gt;  &lt;p&gt;There should be some work done before the implementation is started such system definition and requirements. However, in many businesses I have been around there is little time, if any, allotted to these tasks. 
This leaves the implementation process at a bad starting point. I have found it is not unusual to start implementation with incomplete, or few, requirements. It is also common to have some developers who just want to get done fast so they can go on to the next thing. All these events tend to lead to the same outcome: whatever is easiest wins. Rarely, if ever, do these events lead to what is best for the system or the customer.&lt;/p&gt;  &lt;p&gt;During implementation, automated unit tests should be developed. This is known as Test Driven Development, or TDD. I have been an advocate of TDD since the late 1990’s and early 2000’s. Back then it was called Test First Programming, TFP, and was a key part of eXtreme Programming, XP. It took a while to really see the reason behind TDD, but it came to light when I started seeing how little time it took to make changes to a system and still have confidence that there would be no side effects from the change. The controversy about whether TDD improves quality will probably always be around, since quality does not have a common definition. However, I have found that the improvement in time to fix issues or make enhancements cannot be denied. This is why, when I first found the study out of China on TDD and software process improvement, I was so excited. Like many other studies on TDD, it had no real data showing quality improvement from the practice of TDD. But there was a side note to this study, and that side note reinforced what I had been seeing for years: by having unit tests, the time to fix issues dropped to a fraction of the time to fix issues without unit tests (&lt;a href="https://www.researchgate.net/publication/221592883_Test_Driven_Development_and_Software_Process_Improvement_in_China"&gt;https://www.researchgate.net/publication/221592883_Test_Driven_Development_and_Software_Process_Improvement_in_China&lt;/a&gt;). 
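As a trivial, invented illustration of the kind of unit test that produces this effect: the test pins the behavior down first, so a later change with an unintended side effect fails immediately instead of surfacing in production.

```python
# TDD-style sketch (illustrative only): the test below would be
# written before the implementation and run on every check-in.

def apply_discount(price, percent):
    """Return price reduced by the given percentage, rounded to cents."""
    return round(price * (1 - percent / 100), 2)

def test_apply_discount():
    assert apply_discount(100.0, 10) == 90.0
    assert apply_discount(19.99, 0) == 19.99
    assert apply_discount(50.0, 100) == 0.0

test_apply_discount()
print("all tests passed")
```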
Not only had I seen the timeline improvement, but there was now an empirical study showing the same outcome. Even in light of all this data I still see development teams writing few or no unit tests. The larger issue is that even within the same organization this behavior can be inconsistent. For example, my team is very disciplined about unit tests, but none of the other teams in our organization are.&lt;/p&gt;  &lt;p&gt;As part of source code management there should be a version control system. This is where changes to the code base can be tracked; it also allows only certain users to check in code. The biggest risk here is code with issues being checked in. These problems can occur when one developer is working with out-of-date code and then checks in their changes. The process I require of my team is to compare the code being checked in with the code already in version control, and there have been many times we have spotted problems before the code was even checked in.&lt;/p&gt;  &lt;p&gt;At this point the system needs to be built. Following current DevOps practices, there should be a continuous integration, CI, system in place. This is where the system is built and unit tests are run when code is checked in. In &lt;i&gt;The Mythical Man-Month&lt;/i&gt; by Frederick Brooks, there is an essay advising to plan to build the system twice. Years ago that was true, since development would gather all the requirements up front before implementation even began, which would lead to a system that did not meet the user’s requirements due to the delay between requirements and implementation completion. Brooks’s thought was to build the system once, learn what was wrong when the implementation was complete, then build it a second time to get it right. Needless to say, the cost of development makes this an unrealistic approach. Yet the agile thought process really takes Brooks’s idea and puts it on steroids: in the agile world view, the plan is to build it every day. 
This is the power of CI. It allows the whole development team to see issues with a change immediately. Again, at the heart of this success is unit testing. Without unit testing, CI is not a benefit; it only allows a business to look like it is following a best practice.&lt;/p&gt;  &lt;p&gt;Once the CI build is completed, the next step is full QA verification. When this verification is complete, the final step is a deployment build and a push to production. If a DevOps process is being used, this push to production will most likely use a Blue/Green environment to allow a final look before flipping the switch to production. Many of the businesses I have been around do not do a Blue/Green DevOps deployment; they just push to production, most of the time by a manual process.&lt;/p&gt;  &lt;p&gt;The lifecycle overview I have given is just a reference point for my conclusions. I do not believe that the original bad actor(s) anticipated this type of success. In the Target breach it was discovered that Target was not the original target; a heating and air conditioning firm was. Only after exploring that firm’s network did the bad actors find the credentials for accessing Target (&lt;a href="https://krebsonsecurity.com/2015/09/inside-target-corp-days-after-2013-breach/"&gt;https://krebsonsecurity.com/2015/09/inside-target-corp-days-after-2013-breach/&lt;/a&gt;). I believe that is what happened in this case: a phishing campaign was successful on a workstation, or workstations, and only after access was gained was it discovered what a high-value target had been reached. Given the sophistication of the attack, it is safe to assume it was a group, not a single bad actor.&lt;/p&gt;  &lt;p&gt;What makes this attack a real concern is that it was a supply chain attack. A supply chain attack has a way of opening many other targets for the bad actors to access. 
This is due to the inherent trust relationship between a vendor and customer. It also makes these other targets real victims, since these businesses truly rely on their vendor to look out for their best interests as well.&lt;/p&gt;  &lt;p&gt;It has been stated that the code that was compromised was a SolarWinds DLL called SolarWinds.Orion.Core.BusinessLayer.dll (&lt;a href="https://digitalguardian.com/blog/solarwinds-hacked-used-potentially-massive-supply-chain-attack"&gt;https://digitalguardian.com/blog/solarwinds-hacked-used-potentially-massive-supply-chain-attack&lt;/a&gt;). A public filing with the SEC shows that the malware-infected DLL was being delivered from March to June 2020 (&lt;a href="https://d18rn0p25nwr6d.cloudfront.net/CIK-0001739942/6dd04fe2-7d10-4632-89f1-eb8f932f6e94.pdf"&gt;https://d18rn0p25nwr6d.cloudfront.net/CIK-0001739942/6dd04fe2-7d10-4632-89f1-eb8f932f6e94.pdf&lt;/a&gt;). When I first heard about the DLL and the supply chain attack, I thought it had come from a successful attack to gain access to the development version control system and implant malware into the code base. This attack vector could easily succeed, since I have seen very few teams and organizations constantly verify that their code base contains expected code only. If a bad actor could change code in the version control system, it would most likely not be seen by the development team. There are a few ways these changes might be noticed. One way is through updating third-party libraries. At times these updates cause what we call &lt;i&gt;breaking changes&lt;/i&gt;, which force refactoring, or rewriting, of code. During this type of update, code that is not commonly looked at can be forced under inspection. Most development teams will not keep up with updates of these third-party libraries, because the updates take time away from other items on the timeline that are higher priority. 
I have seen third-party libraries as much as five years out of date or more. This lack of updating can also allow other security issues to exist in a system. These are the updates that many organizations miss in their security posture, because most security-based updates center on operating systems and applications, not internally developed applications.&lt;/p&gt;  &lt;p&gt;In the same SEC filing from above, it is stated that the “vulnerability was not evident in the Orion Platform products’ source code but appears to have been inserted during the Orion software build process”. Going by this SEC filing, having the vulnerability introduced during the build process seems to be a much larger problem. By the name of the library (DLL), it seems to be an internal SolarWinds custom library. In DevOps operations, all builds pull code from version control at build time unless the item is a third-party library for which there is no code. For this attack vector to be viable, the build system must not build all of the code for every release of the system. That is not a good practice, since it relies on the concept of building only items that changed. This approach to system builds has never aligned with most agile methodologies, which lean toward CI-style build processes. Another way this injection during the build process could succeed is if internal code wrapped a third-party library and they were all built as a single library. The problem with this is that when the third-party library changes, the whole DLL has to be rebuilt, and the whole goal of DLLs is to allow libraries to be updated without rebuilding system-specific code.&lt;/p&gt;  &lt;p&gt;One way to reduce &lt;i&gt;code injection&lt;/i&gt; would simply be to keep some sort of hash on every item in the system and have a nightly automated process verify all the items and report any verification failures. 
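Such a nightly hash check could be sketched as follows; the directory layout is an assumption, and a real system would keep the manifest somewhere the build host cannot silently rewrite:

```python
# Sketch of the nightly integrity check described above: hash every
# file under a release directory and report anything that changed.
import hashlib
import pathlib

def build_manifest(root):
    """Record a SHA-256 hash for every file under the release directory."""
    root = pathlib.Path(root)
    return {str(p.relative_to(root)): hashlib.sha256(p.read_bytes()).hexdigest()
            for p in sorted(root.rglob("*")) if p.is_file()}

def verify(root, manifest):
    """Return the files whose current hash no longer matches the manifest."""
    current = build_manifest(root)
    return sorted(name for name in set(manifest) | set(current)
                  if manifest.get(name) != current.get(name))
```

The manifest is built once from a known-good release; the nightly job then calls verify() and alerts on any file it returns, including files added or removed since the manifest was taken.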
Of course, these hashes would have to be securely stored and updated with new hashes when expected changes are made. The cost of this solution is the development time and, of course, someone to review the nightly results. Over the long term this is not that big a problem, but it costs more up front to put in place. It seems that this type of verification system was not in place at SolarWinds.&lt;/p&gt;  &lt;p&gt;The odd thing here is that the SEC filing states “The vulnerability has only been identified in updates to the OrionPlatform products delivered between March and June 2020”. This information opens another possible process concern. Before the rise of agile methodologies, common practice was to have a release schedule, under which regular releases or updates were produced. Some businesses would set a schedule of releasing an update every quarter. So, after a release was developed and published, it would sit on a hard drive and be sent out to customers as needed. It is not uncommon to never look at this image again after it has been created. Given this timeline, it is possible that SolarWinds might still be using this release pattern. If that is the case, it would take very little effort for a bad actor to insert a modified library into the release image and simply let the vendor send out the malware on the bad actor’s behalf. Software as a Service, SaaS, system delivery moves from this scheduled release pattern to a more fluid pattern of constant delivery. It seems that more and more systems are being delivered by this constant delivery pattern even if they are not SaaS-type systems; Notepad++ and Visual Studio are both good examples of desktop applications that use it.&lt;/p&gt;  &lt;p&gt;Taking the SEC filing at face value, source code injection is a non-issue. However, the evidence indicates that there may be some very common build and release problems. 
Given where development methodologies stand at this point in time, it does not seem to make sense not to be moving toward a DevOps process. Making this type of change will take time, but if the business leadership understands the advantages of DevOps and has committed to it, then the time, money, and resources needed will be applied and a successful DevOps process will come in time. The basic DevOps process requires the build system to pull code from version control for each release (QA and final production) on every build. This behavior alone will reduce the risk of a modified component being released, since the images are constantly rebuilt. It is not clear whether a scheduled release pattern is being used, but if one is, it should be moved to a constant delivery pattern. Coupling a constant delivery pattern with builds that pull code from version control will reduce the risk of modified code getting into the build as well.&lt;/p&gt;  &lt;p&gt;To be clear, I have no idea what the internal processes are at SolarWinds; I am only looking through the lens of what I have seen before. Most of these types of issues seem to relate back to the defined priorities. The actions seen from SolarWinds seem to point to practices that are common practices, but not best practices. I cannot help but think that, had some of those common practices been replaced by best practices, SolarWinds could have avoided this whole issue and the impact it will have on the company’s future.&lt;/p&gt;</content>
  </entry>
  <entry>
    <id>http://worldaccordingtodave.com/blog/fact-or-fiction/</id>
    <title>Fact or Fiction?</title>
    <updated>2020-11-18T06:46:37Z</updated>
    <published>2020-11-18T06:46:36Z</published>
    <link href="http://worldaccordingtodave.com/blog/fact-or-fiction/" />
    <author>
      <name>David Demland</name>
      <email>test@example.com</email>
    </author>
    <category term="info" />
    <content type="html">&lt;p&gt;I am often amazed at some of the things I see and read on the Internet. I do not believe, in general, that people are stupid. I do believe, that as a general rule, people will report information based off of their world view. I think that “truth” is what lies between different stories. Each person will see the “truth” that fits into their view of events.&lt;/p&gt;  &lt;p&gt;An example of this was shown in an episode of Matlock. In this particular episode a witness was on the stand as Matlock was questioning them about a murder. The witness was sure they saw a woman who was the murder. During questioning someone in the courtroom made a small fuss, that caught everyone’s attention, and then quickly left the courtroom. Matlock asked the witness if that was a man or woman that left the courtroom. The witness said it was a woman. Matlock brought the person back into the courtroom and after removing the head covering that looks like a woman hat it was in fact a man. Matlock did this to prove that the killer may have been a man not a woman. I highly doubt that this would happen in real life, but it does make a point that what someone believes is the truth comes from their presuppositions.&lt;/p&gt;  &lt;p&gt;I am not saying that this is wrong, just that this is a fact of human nature and we need to be sensitive to this issue. I believe that some people will take advantage of others just to further their own selfish goals. I think these people can do this because they find others who want to believe in something and give them what they want no matter what the real facts are. Giving something to someone, that wants it, not only makes that person feel good; it leaves them open to the giver to start to influence that person. This influence can grow over time, and that I see as an ethical issue. 
It is unethical to create this influence just to get what you want out of that person later on.&lt;/p&gt;  &lt;p&gt;Just recently G News reported “Watermark on Ballots Reveals Alleged Massive Voter Fraud in 2020 Election” (&lt;a href="https://gnews.org/545074/"&gt;https://gnews.org/545074/&lt;/a&gt;). I am not stating my view one way or the other here. What I want to look at is whether technology can actually do what this article claims. The article opens with a statement that ballots were printed with “an invisible, unbreakable code watermark and registered on a Quantum Blockchain System”. To this point I have not been able to find whether this quantum blockchain system exists. I was only able to find a paper about quantum blockchains published in the Journal of Quantum Computing Vol. 1, no. 2, 2019 (&lt;a href="https://localhost:44316/Posts/files/Quantum%20Blockchain%20A%20Decentralized,%20Encrypted%20and%20Distributed%20Database%20Based%20on%20Quantum%20Mechanics_637412787968101110.pdf"&gt;file:///C:/Users/DaveD/AppData/Local/Temp/Quantum%20Blockchain%20A%20Decentralized,%20Encrypted%20and%20Distributed%20Database%20Based%20on%20Quantum%20Mechanics.pdf&lt;/a&gt;), but that does not prove the system exists or does not exist. I do not have the background to say much more on this subject. However, there are some basic points that can be addressed. The assumption that all ballots can carry an invisible watermark is a problem in itself. An NPR article of November 6, 2016 stated that there is no federal ballot design authority and that ballots are designed by a combination of local election officials and their printers (&lt;a href="https://www.npr.org/2016/11/06/500678100/the-art-of-the-vote-who-designs-the-ballots-we-cast"&gt;https://www.npr.org/2016/11/06/500678100/the-art-of-the-vote-who-designs-the-ballots-we-cast&lt;/a&gt;). 
This fact alone calls into question the reliability of the claims in this story.&lt;/p&gt;  &lt;p&gt;In the middle of this article there was a link to the page “Trump Win Validated by Quantum Blockchain System Recount of Votes” (&lt;a href="https://beforeitsnews.com/politics/2020/11/trump-win-validated-by-quantum-blockchain-system-recount-of-votes-3217468.html"&gt;https://beforeitsnews.com/politics/2020/11/trump-win-validated-by-quantum-blockchain-system-recount-of-votes-3217468.html&lt;/a&gt;). That article starts out the same as the G News article; however, it contains a claim that cannot even be true. It states “In addition to the watermark these official ballots also contained ink made of corn, which created an electronic radiation circuit ID that could trace the location of that ballot through GPS transmission. In other words, they could trace if the ballot was filled out by the person named on the ballot.” The first thing to note is that to have an electronic radiation circuit ID there would have to be a means to store that data, and paper cannot store electronic data; that is just a fact of life. Second, to store electronic data there have to be electronics somewhere. I am not saying this cannot be done, but if it were done there would have to be some sort of chip on the ballot. Again, this could only be done nationwide if there were a nationwide ballot maker, which, as the NPR article shows, there is not.&lt;/p&gt;  &lt;p&gt;If you have ever heard of DEF CON, you know that it is a very large annual gathering of hackers and security professionals. One of the best talks I have seen came from DEF CON 22. It was called “Weaponizing Your Pets: The War Kitteh and the Denial of Service Dog” (&lt;a href="https://www.youtube.com/watch?v=DMNSvHswljM"&gt;https://www.youtube.com/watch?v=DMNSvHswljM&lt;/a&gt;). 
This was my first, albeit passing, explanation of how GPS works. In this talk Gene Bransfield talked about what it took to get GPS to work, and it was the first time I heard that you need three satellites for GPS to work. Of course, my first thought was that GPS uses triangulation. I soon found out that was not right; it uses trilateration. For a good explanation of this, see: &lt;a href="https://gisgeography.com/trilateration-triangulation-gps/"&gt;https://gisgeography.com/trilateration-triangulation-gps/&lt;/a&gt;. Basically, all these references show that there is no way a ballot can do this type of tracking. To use GPS there would have to be a small computer on the ballot, and a ballot cannot carry one. Yet there seem to be people trying to convince others that this science fiction idea is real.&lt;/p&gt;  &lt;p&gt;It does society no good to prey on other people’s feelings, and presenting these claims as if they were real facts does no good for anyone. What would be good is if we, as a society, finally realized and accepted that no one person has the complete story; that the real story lies between all the stories; and, more importantly, that we may not have the complete truth. That does not mean we do not have some of the truth, and likewise, others may have just as much of the truth as well. We have to be able to see that none of us is perfect, so we all have presuppositions from which our views start.&lt;/p&gt;</content>
  </entry>
  <entry>
    <id>http://worldaccordingtodave.com/blog/who-is-on-the-phone/</id>
    <title>Who Is On The Phone?</title>
    <updated>2020-11-15T19:47:51Z</updated>
    <published>2020-11-15T19:47:51Z</published>
    <link href="http://worldaccordingtodave.com/blog/who-is-on-the-phone/" />
    <author>
      <name>David Demland</name>
      <email>test@example.com</email>
    </author>
    <category term="Security" />
    <content type="html">&lt;p&gt;When I was in college, I remember in my general psychology class, studying an idea that people are basically good. I believe this was Carl Rogers but I do not honestly remember. At that time, I had a hard time with this concept. If, in general, we agree that we should respect our parents, do not murder, do not sleep around, do not steal, do not lie, do not be envious of others, then why is it that mankind tends to fail more often than succeed at these? After all who has not done some of these things at one time or another? How often do we excuse ourselves of these transgressions, but find fault in others who have failed them as well?&lt;/p&gt;  &lt;p&gt;From my experiences, especially in security, I tend not to see our human nature in such an optimistic light. I tend to believe that when push comes to shove, human nature will look out for number one over looking out for others. Sure, there may be exceptions to this, but in general this is the behavior I have observe more often than not.&lt;/p&gt;  &lt;p&gt;If this is sounding harsh or you think I am over reacting, I just achieved my goal. I have your attention now. I am not as over reacting as this start of this might seem, but if you do not look carefully at some events, you may become a statistic. Dataflog reported that in 2017 there was a 79% success rate in social engineering attacks. It also reported that there was about 83%, of all companies looked at, that had experienced phishing attacks. This report also showed that 49% experienced SMS and voice phishing. (&lt;a href="https://datafloq.com/read/social-engineering-attacks-numbers-cost/6068"&gt;https://datafloq.com/read/social-engineering-attacks-numbers-cost/6068&lt;/a&gt;) Just a couple of years ago there was a large number of arrests in India for social engineering calls to the U.S. and Canada. 
(&lt;a href="https://securityboulevard.com/2018/12/126-arrests-the-emergence-of-indias-cyber-crime-detectives-fighting-call-center-scams/"&gt;https://securityboulevard.com/2018/12/126-arrests-the-emergence-of-indias-cyber-crime-detectives-fighting-call-center-scams/&lt;/a&gt;) At the levels these events occur, and the number of people involved, does it look like humans are basically good? After all is there ever a time stealing is a good thing?&lt;/p&gt;  &lt;p&gt;Social Engineering is a big business, costing consumers lots of money each year. The only way to reduce this is by spending more time educating people on how to spot these bad actors and how to walk away before they become a victim. The first rule is simple: banks, credit cards, Apple, ISPs, and other businesses that have access to part of your finances will not send an E-Mail, or call you the phone, just to say your account is locked and you need to sign in to unlock it. Ignore these calls and E-Mails. The new approach I have been seeing more and more of, is a Robo call that states your Windows license is out of date, or the business you purchased your security software is closed and they want to get you a refund. You are asked to press 1 to talk to someone about the problem. If you get this message just hang up do not waste your time. Also, the IRS will never call you for back taxes or have the police call you so do not talk to these people either.&lt;/p&gt;  &lt;p&gt;If a person should get you on the phone, ask specific questions of the person. For example, if I get a call about my Windows license the first question I ask is “Are you from Microsoft?”. Many times I will be corrected and told they are from Windows. If this is said hang up, only Microsoft has Windows if they are not going to say they are from Microsoft they just confirmed they are a bad actor of some type. Sometimes I will be told yes, it is Microsoft. 
Since I always check my caller ID, and the number is never from Microsoft, I ask why the caller ID did not show Microsoft. With these questions the person on the other end of the line may hang up. If they do not, and continue to insist they are from Microsoft, I ask for a phone number that I can call back. Any number I have been given has never been real, but it gives me a way to get off the phone. The important thing is that I play this game because I am following good security practices, and I hope the person on the other end is paying attention. I am always hoping that the caller may one day figure out that what they are doing is wrong and quit. I know this may sound odd, but I see my job as planting a seed; I cannot grow or harvest that seed, but I hope that one day that will happen.&lt;/p&gt;  &lt;p&gt;Another social engineering attack I have started seeing again is someone calling claiming to be my grandson or granddaughter. This is very funny, since I have no grandchildren. However, from what I have been reading, this is a very common call, especially for the older generation, and I have read many stories about large losses to this attack vector. This is another time when someone is on the phone, so spend time verifying the caller. Ask the caller specific questions, such as “What is my favorite color?” These questions show that you verify the information you are given.&lt;/p&gt;  &lt;p&gt;Another way to identify these social engineers is if they ask you to purchase gift cards. Understand: if you are being asked to buy a gift card, there is no way the caller is a real business. WALK AWAY. Never buy a gift card just to follow the instructions of someone on the phone.&lt;/p&gt;  &lt;p&gt;Keep in mind that these social engineers will always want you to do something right now. If you ask them to wait a couple of days, they will push you not to wait. This is another key indicator that the call is not real, but a bad actor. 
In short, if you are not sure, just hang up and end the call.&lt;/p&gt;  &lt;p&gt;My work in security has shown me that I cannot take people I do not know at face value. I am constantly reminded that many people out there are trying to steal if they can get away with it. As much as I would like to believe that people are basically good, the actions I see from people make me believe that many look out for themselves first, and I cannot see how that is good in any form.&lt;/p&gt;  &lt;p&gt;In short, if something seems out of place, it most likely is. There is nothing wrong with trying to verify anything someone tells you on the phone. Most security experts will tell you that if a business calls out of the blue, it is best not to say anything to the caller until you verify the call is from the business they claim to represent. The best way to do this is to get a phone number, use a reverse phone number lookup to see if the number you were given belongs to that business, and then, and only then, call the number back. In any other case, forget the call ever happened. Doing these things will help keep you from becoming another statistic.&lt;/p&gt;</content>
  </entry>
  <entry>
    <id>http://worldaccordingtodave.com/blog/is-honesty-the-best/</id>
    <title>Is Honesty the Best?</title>
    <updated>2020-11-03T19:11:43Z</updated>
    <published>2020-11-03T19:11:43Z</published>
    <link href="http://worldaccordingtodave.com/blog/is-honesty-the-best/" />
    <author>
      <name>David Demland</name>
      <email>test@example.com</email>
    </author>
    <category term="business" />
    <category term="info" />
    <content type="html">&lt;p&gt;I have always said that it is better to avoid a black eye than to clear it up afterwards. What I mean by this is that it is best to think out all the negative consequences to a decision so the impact of the decision can be clearly seen. I am amazed is how often businesses, and people, do not clearly think about what is said before it is said.&lt;/p&gt;  &lt;p&gt;On October 28, 2020 Jack Dorsey, CEO of Twitter, appeared before a Senate Commerce Committee. When he was asked by Sen. Ted Cruz if Twitter has the ability to influence the election, Mr. Dorsey gave a simple one-word answer: “No”. At that moment I thought how could this business leader even think for a second that people would just agree with that answer. This answer goes against all common sense and evidence. If social media cannot influence people then what was the reason for &lt;i&gt;The Consumer Review Fairness Act&lt;/i&gt;? This act was created and passed for the simple reason that people should be able to post their reviews on line about a business without a fear of punishment. Businesses were acting against people who posted negative reviews about their business because these reviews were hurting the business. These actions alone prove the influence of social media. Yet Mr. Dorsey answer does not align with the events we have seen in action.&lt;/p&gt;  &lt;p&gt;We have had a similar experience in our lives. Cardinal Health produces a feeding pump call the Joey pump for people who cannot take in enough food my mouth. We use this pump for our son and have for many years. In mid-2019, we started having problem getting feeding bags for this pump. At the beginning of 2020 it was announced that we would not be getting bags because of the Covid-19 outbreak and the strain that it was putting on the supply chain. We were told that is due to the bags being sent to the hospitals first rather than home users. 
When this statement was made, an obvious fact was left out: the supply chain issues with these bags started long before the Covid-19 outbreak. The announcement also seemed like a business wanting people simply to go along with what was stated, without any appeal to common sense or intelligence.&lt;/p&gt;  &lt;p&gt;One of the most important characteristics a business, or person, has is integrity. When statements are made that assume, or make someone feel like, they are stupid, there is a loss of integrity. When integrity is lost, it takes more than a single action to rebuild it. An article from entrepreneur.com put it very clearly: &lt;i&gt;“Integrity means telling the truth even if the truth is ugly. Better to be honest than to delude others, because then you are probably deluding yourself, too.”&lt;/i&gt; (&lt;a href="https://www.entrepreneur.com/article/282957"&gt;https://www.entrepreneur.com/article/282957&lt;/a&gt;) I very much agree with this, but I also know that all decisions have consequences, good and bad. It is these consequences that need to be considered as well. When a decision that has been made appears to be a bad one, there may be liability issues that come with it. This is why not every aspect of a decision needs to be stated. In the movie Clear and Present Danger, and I believe in the book as well, Jack Ryan advises the President that when asked if they were acquaintances, he should say they are good friends; if asked whether they are friends, he should say they are lifelong friends. Jack Ryan’s reasoning was to give the news nowhere to go. There is a lot to this thought process. It may be wise simply to give an answer that closes down the conversation.&lt;/p&gt;  &lt;p&gt;Going back to Jack Dorsey and the Senate committee, what if Mr. Dorsey’s answer to Sen. Cruz had been “it’s possible”? Would the senator have been able to continue to ridicule Mr. 
Dorsey and Twitter for their behavior like he did? What if Cardinal Health had simply stated that they were aware of the supply chain issues and were working to resolve them as fast as they could? Would these approaches change the perceptions of the events? I tend to think they would. They would close down some of the issues being raised without opening more questions. They would make people feel the issues were being addressed, rather than that they were being given a line of crap.&lt;/p&gt;  &lt;p&gt;Just to be clear, this approach is not an easy path. To be able to defuse issues, the starting point has to be: we are human and we make mistakes, but we will accept our mistakes not as failures, but as lessons for doing better next time. We know what our intentions are and want everyone to use those as the guideline, but we forget that we can only be judged by our behaviors.&lt;/p&gt;</content>
  </entry>
  <entry>
    <id>http://worldaccordingtodave.com/blog/security-poor-implementation-ideas/</id>
    <title>Security: Poor Implementation Ideas</title>
    <updated>2009-10-27T00:55:00Z</updated>
    <published>2009-10-27T00:55:00Z</published>
    <link href="http://worldaccordingtodave.com/blog/security-poor-implementation-ideas/" />
    <author>
      <name>David Demland</name>
      <email>test@example.com</email>
    </author>
    <category term="Business" />
    <category term="Info" />
    <content type="html">&lt;p&gt;While studying security issues it is not uncommon to hear authors tout their view of the best possible security that could be implemented for a business. There is nothing wrong with these ideas, however I have experienced many times where this goal has created major issues within a business. I believe that IT exists to help push a business forward and to the next level. Two of the major facets of pushing a business forward and upward are productivity and level of service. Both of these areas will affect the bottom line of a business. Low productivity adds a higher cost to product production which will increase cost to the business and to the business customers. Slow response times to customers’ needs will hinder the productivity of that customer which will in turn affect perception of the overall business. It would be one thing if all IT customers were external, but IT also has to work with internal customers as part of their daily operations. These internal customers are the customers that help the business operator each and every day and thus have a heavy impact to the overall health of the business. Here are a few examples where IT has tried too hard to achieve highly secure systems, but affect overall business operations.&lt;/p&gt;  &lt;p&gt;It is not uncommon for a security implementation to include locking out an account after three to five failed attempts and changing passwords every sixty to ninety days. This is good and most every security author and book will support this concept to reduce success rates of brute force, or password, attacks. However, if your business has remote staff, this could lead to a problem. If a user should lock the account, or have a laptop that is not used as much as their desktop, login problems will occur. This will lead to the issue “how do you fix a system where the user is not technical and IT has no access to the system”. 
Just a side note: even if the system connects to the corporate network over a VPN, how do you get the system on the VPN when it cannot be logged into? This is a real problem. One solution could be to pack the system up, send it to the corporate office for IT to fix, and have it sent back. The problem with this solution is that it leads to long downtime for the team member and any customer that team member is working with. A second solution could be to pay a local computer repair business to fix the system. This adds cost to the fix, but the downtime is lower. A third solution could be to give the user the administrator password for the system. How sound is that option, given that the highest security risk comes from internal people? A fourth option would be to place all remote users in a group and give that group full access to their systems, including an administrator password that does not affect other systems on the network. This way, IT help desk personnel can talk the user through just about any steps needed to get the system, and the user, back online.&lt;/p&gt;  &lt;p&gt;Looking at E-Mail, there are some IT professionals who will lock down E-Mail so that it can only be retrieved on the local network, and will not allow POP retrieval. A reason might be given that E-Mail may contain sensitive information that has to be kept on the corporate E-Mail server. How about giving some credit to non-technical users, most of whom understand that E-Mail is the same thing as a postcard? Even E-Mail travelling through the Internet can be read by anyone intercepting the packets, because without encryption they are clear text. How does it make our profession look when we, as experts, make such stupid comments? Today, hindering access to E-Mail is like saying that the business does not need its customers. With so much of today’s communication being done through E-Mail, can we afford to look this foolish? 
Is it not possible for a user to leak sensitive E-Mail to others just by forwarding it? Of course it is. There is no technology that will close the barn door before the horses get out. The only defense a business has is to keep copies of all E-Mail traffic on the E-Mail server and audit it periodically; that process should already be in place to protect the business against improper use of E-Mail.&lt;/p&gt;  &lt;p&gt;These are just a couple of the ways that we IT professionals may try to look very security minded, yet fail to benefit the business. In fact, the best case is that we merely cost the business productivity. In the worst case, we lose customers or build a bad service reputation. Can we afford to do either?&lt;/p&gt;  &lt;p&gt;Keep in mind, as professionals, we need to serve both our business and our business’s customers. If the goal is to have the most secure system, then the only way to achieve it is to lock the system behind heavily monitored doors and walls and remove all external access. Not only would this be the most secure system we can build, it would also be the most useless. How good is a system that cannot interface with any other system, or person? Would this type of system benefit your business?&lt;/p&gt;</content>
  </entry></feed>