Are your passwords strong enough to resist a brute force attack?

Passwords are just about dead. Many systems now offer two-factor authentication: you give them your cell phone number, and you must enter both a password and a code sent to the phone to log in. But passwords persist. They are easy for administrators, and they are part of the common culture.

Steve Gibson has the engineer's "knack." (See the Dilbert video here.) His company, Gibson Research Corporation (here), sells a wide range of computer security products and services, and he also offers many for free. Among the freebies is Haystack: How Big is Your Haystack – and how well is your needle hidden? (here). This utility provides a metric for measuring password security.

It is pretty easy to do yourself, if you like arithmetic. There are 26 upper-case letters, 26 lower-case letters, 10 digits, and 33 symbols (including the space), for 95 printable characters in the common ASCII set. So an 8-character password has 95 to the 8th power possible combinations: about 6.634 times 10 to the 15th power, or over six and a half quadrillion. If you could try a million guesses a second, exhausting them would take about 6.6 billion seconds, or just over 200 years. (60 seconds/minute × 60 minutes/hour × 24 hours/day × 365.25 days/year × 200 years ≈ 6.3 billion seconds.)
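If you would rather let the computer do the arithmetic, here is a minimal Python sketch of the same calculation; the 95-character set and the one-million-guesses-per-second rate are the figures used above.

    # Search-space size for an 8-character password drawn from the 95 printable
    # ASCII characters, and the worst-case time to try every combination.
    alphabet_size = 26 + 26 + 10 + 33          # upper, lower, digits, symbols (incl. space)
    length = 8
    combinations = alphabet_size ** length      # 95**8, about 6.634e15

    guesses_per_second = 1_000_000              # one million guesses per second
    seconds = combinations / guesses_per_second
    years = seconds / (60 * 60 * 24 * 365.25)

    print(f"{combinations:.3e} combinations")
    print(f"{seconds:.1e} seconds, or about {years:.0f} years")   # roughly 210 years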

Gibson Research makes all of that automatic. Just key in your password, and it tells you how long it would take to crack.

Cracking passwords is a "routine activity" for a hacker. They have tools. At one meet-up for hackers, the speaker told us, "If you have to use brute force, you are not thinking." They do not type in a million guesses per second, of course; they have programs to do that. Also, most websites simply do not allow that kind of traffic: you cannot submit a million guesses per second. What the hackers do is break into a site, such as Target, Home Depot, LinkedIn, or eHarmony, download all of the log files, and then, on their own time, let their software attack the data offline.

Also, hackers do not use the same computers that you and I do. They start with gaming machines, because the graphics processors in those are built for high-speed calculation. They then gang many of those processors together to create massively parallel computers. The calculators from GRC show the likely outcome of brute force by both a "regular" computer and a "massive cracking array."

If someone got hired today at a typical midrange American corporation, their password might just be January2016. If, like most of us, they think they are really clever, it ends with an exclamation point: January2016! Hackers have databases of these. They start with standard dictionaries and add every known password they discover.

One common recommendation is to take the first letters of a phrase that is known only to you and personal to you. My mother had naturally red hair for most of her life. She was born in 1929 and passed away in 2012. So "My mother's red hair came from a bottle" becomes mmrhcfab19292012. According to Gibson Research, brute force guessing with a massive cracking array would take over 26 centuries.
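Here is a rough Python sketch of the first-letters idea, using the example phrase above. The hundred-trillion-guesses-per-second rate is roughly what GRC assumes for its massive cracking array, and GRC also counts every shorter length, which is why its figure comes out slightly higher than this one.

    # Derive a password from the first letters of a personal phrase plus two dates.
    phrase = "My mother's red hair came from a bottle"
    password = "".join(word[0].lower() for word in phrase.split()) + "1929" + "2012"
    print(password)                          # mmrhcfab19292012

    # Rough search-space estimate: 16 characters drawn from lowercase letters
    # and digits (36 symbols), attacked at one hundred trillion guesses/second.
    combinations = 36 ** len(password)
    centuries = combinations / 1e14 / (60 * 60 * 24 * 365.25) / 100
    print(f"about {centuries:.0f} centuries to exhaust the space")   # on the order of 25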

Gioachino Rossini premiered his opera William Tell in 1829. "William & Tell = 1829" would take a massive parallel cracking machine about 1 million trillion centuries to guess. On the other hand, a "false phrase" such as Five + One = 27 could not be cracked in under 1.5 million centuries.


Texas State Guard Maritime Regiment non-commissioned officers at leadership training.  Only the one on your far right is a real Marine.

Remember, however, that a dictionary attack will crack any common phrase. With over 1.7 million veterans of the United States Marine Corps, someone (probably several hundred someones) has "Semper Fi" for a password. Don't let that be you. A brute force attack would need only 39 minutes, but even that is not necessary: a cracker's dictionary should have "Semper Fi" in it already.
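The defensive version of that dictionary attack takes only a few lines: check your own candidate password against a published wordlist before you use it. A minimal sketch; the wordlist filename is a placeholder for whichever list of leaked passwords you download.

    # Is the candidate password already in a wordlist of known/leaked passwords?
    candidate = "Semper Fi"

    with open("leaked_passwords.txt", encoding="utf-8", errors="ignore") as wordlist:
        known = {line.rstrip("\n") for line in wordlist}

    if candidate in known or candidate.lower() in known:
        print("Already in the wordlist: treat it as pre-cracked.")
    else:
        print("Not in this particular list, which by itself proves very little.")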

(Above, I said that cracking passwords is a "routine activity" for a hacker. "Routine activities" is the name of a theory of crime. Attributed to sociologists Marcus Felson and Lawrence E. Cohen, routine activities theory says that crime is what criminals do, independent of such "social causes" as poverty. (See Routine Activity Theory on Wikipedia here.) That certainly applies to password crackers. Like other white-collar criminals, they are socially advantaged sociopaths. They are planfully competent, calculating their efforts against a selfish return.)

Beware; Honda Cares!

Posted: January 24, 2016 by IntentionalPrivacy in Historical and future use of technology

I have watched the YouTube video "United Breaks Guitars" several times, and while it makes me laugh every time I see it, I have come to understand that the issue it raises is bigger than one airline and one guitar.

“United Breaks Guitars” is the story of a man who hands over his Taylor guitar to United baggage and watches from inside the plane, helpless to protect it while United baggage handlers deliberately break it.

Stories like this often start with "I shoulda …," as if it is somehow our fault that we unwittingly trusted someone whom we paid (yes, PAID) to treat us and our belongings with respect. Instead, when they abuse our trust, when they lie, fail to deliver on their promises, or, worse, deliberately break something that has been committed to their care, we are supposed to accept it and move on with our lives.

I watched this video again while I was writing this article. I was thinking of words to substitute for the song lyrics to fit my recent problem with my new 2015 Honda Accord, which I purchased in September. Unfortunately, most of the words I thought of were not printable.

While I have bought a couple of new cars before, they were practical models without extra features. The only thing I have ever purchased that cost more than this car was a house. I fell in love with this car. It had amazing technology. It was a beautiful Obsidian Blue Pearl. The doors close with a very solid thunk. It drives great and it is comfortable. It has many other features that I enjoy.

However, it has some features that I do not enjoy. One such feature is that you cannot unlock the passenger door from outside the car with the key fob if the car is parked and running. I was told that was a safety feature. That one is annoying, but I can live with it. Other features are not so acceptable.

I am stuck in traffic every day for a couple of hours. I download audio books to my phone, and I was very happy to discover that I could hook my phone into the car’s Bluetooth and listen to my current book on my long commute. Unfortunately, if I receive a text message while my phone is connected, the text message replays every time I use my right turn signal until I turn the car off.

Imagine: When I get in my car after leaving work, I text my husband to tell him I am on my way home. I’m driving down the road and he texts me back “ok.” There are at least five right-hand turns on my route home. Ok … ok … ok … ok … OK!

The first time it happened, I almost drove off the road. The next time, I pulled off the road and tried to figure out how I could fix it. I work in technology; there must be some option I could change, I thought. As I explored the options, I decided the user interface was terrible and counterintuitive. I got the manual out; it did not explain the options at all. The manual actually refers only to the iPhone, but it does not explain the options there either.

But no, none of the available options made a difference in the car’s behavior.

I was sure there was something I was missing in the settings or maybe the dealership could install an update that would fix the problem. I drove to the dealership. I took a service writer for a ride in my car and let him experience the text message problem. He told me that it was supposed to work like that. My choices were to turn off the right-hand camera or not attach the phone to the car.

Spending that much on a car and not being able to use the features I bought it for seemed ridiculous to me.

Next, I talked to his boss, who also dismissed my issue and me.

My phone is a Samsung Galaxy S5 and my carrier is Verizon. Yes, they are both on Honda’s list of approved phones and carriers.


Then we discovered that my husband’s iPhone 5 does not connect at all, even though it is also on Honda’s approved list.

I went home and wrote a letter to Honda America. A month later I heard from “Crystal,” who said she would contact the dealership and then call me back. That was in early November, and I have not heard from her since.

The car has pale gray velour seat covers. I drink coffee in the car and I knew what those seats would look like in six months without stain repellent. I purchased the Auto Butler interior stain repellent as well as the exterior coating to protect it from the Texas sun.

As I was driving to work one morning, my coffee tipped over. Instead of the coffee beading up the way the loan officer had shown us so I could pull over and wipe it up, it soaked right in. I was furious.

I called the dealership and asked to speak to the General Manager. The switchboard told me it wasn’t convenient for him to talk to me. I told her that it wasn’t convenient for me to have spilled coffee all over the inside of my car either. She switched me over to the Service Director.

I explained my problems with the car.

He told me to bring it in and they would make it right.

That was the week before Thanksgiving. They did clean the seat (although I swear I can still see coffee stains). When I picked up the car, the new, pale green bathmats I use as seat protectors were wadded up on the floor with great big, greasy footprints on them.

The Service Director (I’ll call him “George”) gave me a 2015 Honda Accord loaner. The loaner—with a different user interface—did not have the text message issue.

They kept my car for two weeks, claiming they put in updates and reset everything. When I got the car back, George sent me a link that explained how to reset Bluetooth on my phone to fix connection issues. I applied the Verizon update to my phone that had come out the day before. Even though I did not have a connection issue, I deleted the HandsFree link from my phone. I followed the directions for resetting Bluetooth. Then I reinstalled the phone in the car.

It did not help.

Instead, the car had a new problem. I was listening to the radio, and the Bluetooth on my phone was turned off. A text message arrived; the car turned on Bluetooth and played the message. I turned on the right turn signal and the message replayed.

George told me the Honda engineer said that problems with phones happen because the phone models they work with three years before the car comes out are not the same phones that end up connecting to the car. While I can understand that phone models change, the phone uses Bluetooth 4.0, which is supposed to be a common standard.

I called George and said I wanted it fixed. Fix the car, replace it, or give me my money back. I said if the loaner did not have the problem, my car should not have an issue either.

He sighed and told me to bring it in again.

They had it for a week when George called to say that Honda had agreed to replace the audio unit. He made it sound like it cost several thousand dollars to replace and they were doing me a big favor. He finally called me five days later to say it was fixed and I could pick it up any time.

So at noon on Thursday, I drove to the dealership to trade the loaner for my car. Instead of taking five minutes to turn in the keys, get a receipt, and pick up my car, I sat there for 45 minutes. When I was finally called to the desk, a different service writer tried to hand me a bill for $561. I politely handed it back to him and said it was supposed to be warranty work. He handed it back to me. I said he had better check, unless he wanted me to call my lawyer right then. Another twenty minutes went by. Magically, the charges had disappeared when they handed me the receipt the second time. I finally got my keys and my car, and hooked the phone back up to the car.

Did they go for a test drive with me to show me it was fixed? No. I got in the car and had someone send me a text message. Problem still there.

In the meantime, my husband had taken our 2005 Honda into the same Austin, Texas, dealership to get it inspected because the power steering was making a noise. They resealed the power steering pump, and replaced the valve cover gasket and the cam plug. When he picked the car up and drove away, the engine light came on. He took it back and they charged him another $65 to tell him that an additional $670 was needed to replace the spark plugs and the induction coils. He went to an auto parts store and picked up four spark plugs for $52. When he pulled out the spark plugs, he found two springs under one of the spark plugs and none under one of the others.

Technology is supposed to make your life easier, better, and safer. I would argue that this car does not make my life easier, better, or safer: its problems are annoying and distracting. I should not have such issues with a brand-new car. I should not have such customer service issues with the dealership either.

The warranty package I bought with this car is called “Honda Cares.” It sounds great!

Honda, do you care? If you do, you will fix my car!

In fact, you should fix both our cars.


Bleeding Data – South by Southwest workshop

Posted: August 30, 2015 by IntentionalPrivacy in First Steps, Personal safety, Privacy

We put together a workshop proposal called “Bleeding Data: How to Stop Leaking Your Information” for SXSW Interactive. The workshop will help consumers understand data privacy issues. We will demonstrate some tools that are easy to use and free. Please create a login at SXSW and vote for our workshop! http://panelpicker.sxsw.com/vote/50060. Voting is open until September 10, 2015.

I get my hair cut at the local salon of a famous chain of beauty schools that stretches across the US. It is a subsidiary of a much larger, high-end beauty products conglomerate. I have gotten my hair cut at various locations for years. It is a good value for the money, and the resulting haircuts are at least as good as, and often better than, ones I have received at their full-price salons.

Friday, I called to schedule a haircut and a facial. The scheduler asked for my credit card number to reserve my appointment. I asked if this was a new policy. The scheduler said they only asked for a credit card number for services that had a large number of no-shows. I asked when my card was charged, and she tried valiantly to explain how it worked.

I declined to give her my card and asked her to set up an appointment only for the haircut.

The next day, when I went in for my hair cut, I asked for their written policy on storing credit card numbers:

  • How long is the card stored in their system?
  • Who has access to it and what can they see?
  • How and why is a transaction against my number authorized?
  • What other information are they storing with my credit card number? Name, address, phone number …
  • Are they using a third-party application or does a third party have access to my information?
  • Are they following the best practices (for example, encrypted databases and hashing card numbers) recommended by the Payment Card Industry (PCI) Security Standards Council, in particular the Payment Application Data Security Standards (available from https://www.pcisecuritystandards.org/security_standards/index.php)?

The receptionist referred me to their call center, where I eventually spoke with a manager, who could not answer my questions. She promised to find out and email me the policy, which I have yet to see.

I mailed a letter to the executive chairman of the beauty products conglomerate and the manager of the local school. I am not going back unless they come up with a satisfactory policy. Any organization that stores credit card information should have a written policy that explains how they protect it, and it should be available on customer request. It is not only best practice from a Payment Card Industry point-of-view, but it avoids misunderstandings between customers, employees, and management.

I've been a customer for over 20 years. Privacy matters, data security matters, and if your organization doesn't think enough of my business to adequately protect my information and be able to show me that it does, I am going someplace that will. No matter how much I like your haircuts.

I had an interesting experience last week (my life seems to be full of them!). I signed up to take a class that purported to give me a better understanding of what I was looking for in a career.

The first day of class, the instructor gave us the URL for an application that he had developed to collect a considerable amount of information about each of us: likes, desires, Myers-Briggs profile, and results from other assessment tests. During the class break, I asked him why the application was not using HTTPS. He said it did, but that it used a referrer. I looked at the code of the web site. Hmm, not that I could see.

When I got home, I loaded up Wireshark so I could watch the interaction of the packets with the application. The application definitely did not use HTTPS. I emailed the instructor. Oh, he said, there was a mistake in the documentation, and he gave me the “real” secure URL.

Ok, so this application is sending his clients’ first and last names, email addresses, passwords, and usernames in clear text across the Internet. Not a big deal, you say?

It is a big deal, because many people use the same usernames and passwords for their accounts all around the Web. Add in their email address, and their personal information is owned by anyone sniffing packets on whatever unsecured network they happen to be using: an open wireless network in a coffee shop, an apartment building, a dorm room …

So next, because I now had their "secure" website URL, I checked it against http://www.netcraft.com/, https://www.ssllabs.com/ssltest/, and some other sites; all of this is public information. According to these tests, the application was running Apache 2.2.22 (released on January 31, 2012), WordPress 3.6.1 (released on September 11, 2013), and PHP 5.2.17 (released on January 6, 2011). It is never a good idea to run old software versions, and old WordPress versions are notoriously insecure.

Please note: I am not recommending either of these websites or their products; I merely used them as a method to find information about the application I was examining.
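You can get a first approximation of that kind of version information yourself from the headers a web server volunteers. Here is a minimal sketch using Python's third-party requests library; example.com is a placeholder for the site being checked, and servers can suppress or fake these headers, so treat them as hints rather than proof.

    # Print the version banners a web server chooses to advertise.
    import requests

    response = requests.get("https://example.com/", timeout=10)
    for header in ("Server", "X-Powered-By"):
        print(f"{header}: {response.headers.get(header, '(not disclosed)')}")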

Not only that, but the app supported SSLv2 and SSLv3, so its encryption technology was archaic. Qualys SSL Labs gave the app an "F" for its encryption, and that was after he gave me the HTTPS address.

(“It was harder to implement the security than we thought it would be,” he said.)

Although I did not find out the Linux version running on the web server, based on my previous findings—which I confirmed with the application owner—I would be willing to bet that the operating system was also not current.

So then I tried creating a profile. I made up first and last names and a username, and used a test email address from example.net (https://www.advomatic.com/blog/what-to-use-for-test-email-addresses). I tried "test" for a password, and it worked. So the app does not check for password complexity or length.
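A password-complexity check takes only a few lines of code, which makes its absence all the more striking. Below is a minimal sketch of the kind of rule the application could have enforced; the length and character-class thresholds are illustrative, not anyone's actual policy.

    import string

    def password_acceptable(password: str, min_length: int = 12) -> bool:
        """Illustrative policy: minimum length plus at least three character classes."""
        classes = [
            any(c in string.ascii_lowercase for c in password),
            any(c in string.ascii_uppercase for c in password),
            any(c in string.digits for c in password),
            any(c in string.punctuation for c in password),
        ]
        return len(password) >= min_length and sum(classes) >= 3

    print(password_acceptable("test"))              # False: too short, only one class
    print(password_acceptable("Xk3!mq9@TzL4w"))     # True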

He asked me on the second day of class whether I now felt more comfortable about entering my information in his application since it was using HTTPS. I said no; his application was so insecure that it was embarrassing, and it appeared to me that they had completely disregarded any considerations about coding the application securely.

He said that they never considered the necessity of securing someone’s information because they were not collecting credit card information.

I said that with the amount of data they collected, a thief could impersonate someone easily. I reminded him that some people use the same usernames and passwords for several accounts, and with that information and an email account, any hijacker was in business.

Then he said that he was depending on someone he trusted to write the code securely.

Although I believe in trust, if it were my application, I would verify any claims of security.

I told him he was lucky someone had not hacked his website to serve up malware. I said that I was not an application penetration tester, but that I could hack his website and own his database in less than 24 hours. I said the only reason it would take me that long is because I would have to read up on how to do it.

I told him I would never feel comfortable entering my information in his application because of the breach of trust between his application and his users. I said that while most people would not care even if I explained why they should care, I have to care. It is my job. If my information was stolen because I entered it in an application that I knew was insecure, I could never work in information security again.

So, what should you look for before you enter your information in an application?

  1. Does the web site use HTTPS? HTTPS stands for Hypertext Transfer Protocol Secure, which means that the connection between you and the server is encrypted. If you cannot tell because the HTTPS part of the address is not showing, copy the web address into Notepad or Word and look for HTTPS at the beginning of the address. (There is also a quick programmatic check sketched after this list.)
  2. Netcraft.com gives some basic information about the website you are checking. You do not need to install their toolbar; just put the website name into the box below "What's that site running?" about midway down the right-hand side of the page.
  3. Qualys SSL Labs tests the encryption (often known as SSL or TLS) configuration of a web server. I do not put my information into any web site that does not score at least a "C."
  4. You should also be concerned about sites that serve up malware. Here are some services that check for it:

http://google.com/safebrowsing/diagnostic?site=<site name here>

http://hosts-file.net/ — be sure to read their site classifications here

http://safeweb.norton.com/

  5. Do not enter any personal information in a site when using an insecure Wi-Fi connection, such as at a coffee shop or a hotel, just in case the site does not have everything secured on its pages.
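For check number 1 above, the HTTPS question can also be answered programmatically. A minimal Python sketch using the requests library; example.com is a placeholder, and requests follows redirects and verifies certificates by default, raising an error if the certificate is invalid.

    # Does the site end up on HTTPS when you start from plain HTTP?
    import requests

    response = requests.get("http://example.com/", timeout=10)  # follows redirects
    print("Final URL:", response.url)
    print("Served over HTTPS:", response.url.startswith("https://"))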

The Electronic Frontier Foundation (EFF) recently released a plug-in for Chrome and Firefox called Privacy Badger 1.0. A plug-in is a software module that can be loaded into a browser to add functionality. What the Badger plug-in does is block trackers from spying on the web pages you visit.

Why should you care? Because Big Data companies track everything you do online. What do they do with that data? One thing they do is analyze it to predict consumer behavior. Here are a couple of articles that explain some of the issues: "The Murky World of Third-Party Tracking" is a short overview, while the EFF has a three-part article called "How Online Tracking Companies Know Most of What You Do Online (and What Social Networks Are Doing to Help Them)" that, while several years old, is very detailed.

The FTC has gotten involved as well. Here is a link to one of their papers, called "Big Data: A Tool for Inclusion or Exclusion?"

I loaded the Badger plug-in as soon as it came out, and I am amazed at the number of trackers it blocks (it does allow a few). One CNN.com page I visited had over a hundred trackers blocked, and a Huffington Post page had almost as many. I also run other plug-ins in Firefox (Ghostery, NoScript, AdBlock Plus, Lightbeam).

The Badger icon in the upper right-hand corner tells you how many trackers are blocked.

The best thing about Badger is that it is very easy to use, unlike NoScript.

Give it a try, and let me know what you think.

As I do almost every day, I was looking through security news this morning. An article by Graham Cluley about a security issue (CERT CVE-2015-2865) with the SwiftKey keyboard on Samsung Galaxy phones caught my eye. The issue arises because the keyboard updates itself automatically over an unencrypted HTTP connection instead of over HTTPS and does not verify the downloaded update. It cannot be uninstalled, disabled, or replaced with a safer version from the Google Play store. Even if it is not the default keyboard on your phone, successful exploitation of this issue could allow a remote attacker to access your camera, microphone, and GPS, install malware, or spy on you.

Samsung provided a firmware patch early this year to affected cell phone service providers.

What to do: Check with your cell phone service provider to see whether the patch has been applied to your phone. I talked to Verizon this morning, and my phone does have the patch. Until you are sure you have the patch, do not connect your phone to an insecure Wi-Fi network, which is not a good idea anyway.

~

An interesting article in Atlantic Monthly discusses purging data from online government and corporate databases (think insurance companies or Google) once it is two years old, since these organizations cannot keep their online databases secure. I can see the point, but some of that information may actually be useful or even needed after two years. For instance, I would prefer that background checks be kept for longer than two years, although I would certainly like the information they contain to be secured.

Maybe archiving is a better idea than purging. It is an interesting option, and it certainly deserves more thought.

~

Lastly, LastPass: I highly recommend password managers. I tried LastPass, and it was not for me. I do not like the idea of storing my sensitive information in the cloud (for "cloud," think "someone else's computer"), but it is very convenient. Most of the time, you achieve convenience by giving up some part of security.

LastPass announced a breach on Monday, and not their first. They said that "LastPass account email addresses, password reminders, server per user salts, and authentication hashes were compromised."

For mitigation, LastPass has told its user community that it will require verification when a user logs in from a new device or IP address. In addition:

  1. You should change your master password, particularly if you have a weak password. If you used your master password on other sites, you should change those passwords as well.
  2. To make a strong password, make it long and strong. It should be at least 15 characters (longer is better) and contain upper- and lowercase letters, digits, and symbols. It should not contain family, pet, or friend names, hobby or sports references, birthdates, wedding anniversaries, or topics you blog about. Passphrases are a good idea, and you can make them even more secure by taking the first letter of each word of a long phrase that you will remember. For example:

    I love the Wizard of Oz! It was my favorite movie when I was a child.

    becomes

    IltWoO! IwmfmwIwac$

    Everywhere a letter is used a second time, substitute a numeral or symbol, and it will be difficult to crack:

    IltWo0! 1>mf3wi<@c$

  3. When you create a LastPass master password, it will ask you to create a reminder. Let's say you took your childhood dog's name, added the number "42," and added the color "blue" because he had a blue collar, to make your new master password: osC@R-forty2-Blew! If your reminder is "dog 42 blue," your password could be much easier to crack. Maybe you even talked about Oscar in a Facebook post. So again, do not use a pet's name in your password. Then put something in for the reminder that has no relation to your password: "Blank" or "Poughkeepsie," for instance.
  4. Keep your master password someplace safe. Do not leave a copy in clear text on your phone or your computer or taped to your monitor. Put it in a locked drawer or better—your safe deposit box.
  5. Back up your password database periodically to a device you store offline. Printing the list and storing both the printout and the backup in a sealed envelope in your safe deposit box is a good idea as well.
  6. Use two-factor authentication. If you don’t know anything about it, this Google account article will explain it.

Let me tell you about children who are leading changes in a wide variety of areas, including education, research on cancer and asthma, and even information security and privacy. It was eye-opening to me, because many people (including me!) discount discoveries made by children because they are "too young" to add significant information to a dialog. What they could add, if we give them a chance, is a fresh perspective.

I recently had the opportunity to attend an information security keynote presentation given by Reuben Paul. I attend many security events every year, so that might not seem so unusual, except that this amazing young man is only nine years old. He gave his first information security presentation, "Infosec from the Mouth of Babes," at the 2014 DerbyCon conference in Kentucky at the age of eight, and he has given many presentations since then. Here is his story. His father, Mano Paul, is an information security trainer and consultant.

Reuben’s talk at DerbyCon discussed three topics:

  1. Why should you teach kids about Information Security?
  2. How can you teach kids about Information Security?
  3. What can kids teach you about Information Security?

Reuben’s advice at DerbyCon? “[Parents and educators should] teach … kids to use [technology] safely and securely.”

Many grownups do not have the level of understanding of privacy and security that Reuben does. How did Reuben gain that understanding? Reuben credits his parents and his school for being supportive, but some credit belongs to Reuben. He imagined how children could participate in information security and privacy, and insisted on being heard. That takes, well, imagination as well as persistence.

Then I started looking at other amazing children. I found a section on TED Talks called “TED under 20.”

One of the first videos I saw was called Science is for everyone, kids included. The video tells the story of neuroscientist Beau Lotto working with a class of 25 eight- to ten-year-old children from Blackawton Primary School, Blackawton, Devon, UK. The children developed an experiment on training bees to choose flowers according to rules. Then the children wrote and submitted a paper, which was published by the Royal Society Biology Letters.

The paper is free to download and fun to read!

The conclusion the Blackawton Primary School children came to was that “Play enables humans (and other mammals) to discover (and create) relationships and patterns. When one adds rules to play, a game is created. This is science: the process of playing with rules that enables one to reveal previously unseen patterns of relationships that extend our collective understanding of nature and human nature.”

Jo Lunt, science teacher at Blackawton Primary School, said, “I think one of the biggest changes I’ve seen is the children’s approach to learning science. They don’t get so hung up or worried about getting the answer right. They think more about the journey they’re on and the learning they’re doing along the way.”

How I harnessed the wind is the story of William Kamkwamba. Malawi, the country where he lived, experienced a drought in 2001. He and his family not only could not pay for his schooling, they were all starving because their crops had failed. He was determined to help his family find a solution for the drought. He found a book in the library with plans for a windmill. At the age of 14, he built his first windmill from scrap-yard materials to pump water for crop irrigation and to generate electricity.

Award-winning teenage science in action explains the projects of the three teenage girls who won the 2011 Google Science Fair. Lauren Hodge, winner of the age 13-14 category, researched how carcinogens form while grilling chicken. Shree Bose, winner of the age 17-18 category and the grand prize, concentrated on the reasons why cancer survivors develop resistance to chemotherapy. Naomi Shah, winner of the age 15-16 category, used a complex mathematical model to look at ways to improve air quality for asthmatics.

Children learn very rapidly, and since they have used technology all their lives, they will often master new skills with an ease that will take your breath away. Be the change, mentor change, and be willing to change. Be open to learning from anyone who can teach you!

Be the change … for information privacy! Part 2

Posted: April 20, 2015 by IntentionalPrivacy in Uncategorized

Continued from Part 1

Case: Identity theft

I first became aware of identity theft in 1996 when my credit card number was stolen. I had rented a car, and my application had every piece of information that they could dig out of me, including a copy of the front and back of my credit card and my driver’s license. I can picture the rental agent leaving my file on the desk in her cubicle while she went to the car lot to check out a car for her next clients, who were sitting at her desk. While she was gone, they copied my information. At the time, I lived only a couple of hours away from the Canadian border. When the credit card company called to alert me because the card usage was not in line with my card profile, they told me somebody in Canada used my card to buy “services” over the phone. The charge was only $75, but it took forever to get it off my bill: paperwork, registered letters, phone calls, time, and frustration.

Of course, the car rental company (a nationwide chain) denied everything, but it was the only place I had used my card where the information was out of my sight, and the thieves had details such as my address.

Outcome: I persisted and eventually the charge was removed from my statement.

Case: Confidential information sent in email

We used the same accountant for years. One year after we completed the intake for our taxes, I asked him when we could stop back to pick up the completed forms. He said that he would email them to us. I asked if they would be encrypted and he said yes. I nodded and we left. Three days later, I received an email with our taxes attached as a PDF.

I was livid. I called his office, and asked why he had sent our taxes in an unencrypted PDF through email. He told me they were encrypted.

What he meant was that he filed taxes with the IRS using SSL encryption. He said that his IT staff told him they could not encrypt email attachments.

Email is not a secure method of communication. While it may be transmitted using TLS (the Transport Layer Security protocol, which encrypts transmission between servers), if the receiving email server does not accept encrypted transfers, your email is sent as unencrypted plain text.

I tried to explain to him that he could even use a compression program like WinZip (which I knew he had) with a shared password. Nope. He would not listen. I sent him a letter in which I carefully explained cryptography and some options for using it that were inexpensive and not difficult to implement. Then I explained that, because he would not consider them, I was ending our business relationship. That was very difficult, because our families had been friends for years.
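For what it is worth, the compression-with-a-shared-password option he refused takes only a few lines of code today. A minimal sketch using the third-party pyzipper package; the filenames and password are placeholders, and the password still has to be shared over a separate channel such as a phone call.

    # Put a tax PDF into an AES-encrypted zip archive before emailing it.
    # Requires the third-party pyzipper package (pip install pyzipper).
    import pyzipper

    with pyzipper.AESZipFile("tax_return.zip", "w",
                             compression=pyzipper.ZIP_DEFLATED,
                             encryption=pyzipper.WZ_AES) as archive:
        archive.setpassword(b"a long passphrase agreed on by phone")
        archive.write("tax_return.pdf")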

Outcome: I changed our tax accountant.

Case: My financial institution leaked my information when they changed the monthly statement format.

I use online bill payment to pay most of my bills. I always reconcile my paper statement against my checkbook register. A couple of years ago when I glanced through my statement, I saw to my horror that both my social security number and my credit card number were printed in the transaction section of the statement. I realized that one of my Sallie Mae loans was using my social security number as the account number when they returned payment information.

I called the bank immediately.

The customer service representative did not understand my concerns. When I hung up, I wrote a letter to each board member at my financial institution and sent copies to Sallie Mae and my credit card company. The credit card company did not understand either, but Sallie Mae stopped using my social security number as my account number (which they should not have been doing anyway; it is against the law).

Think about how many people this could have affected! If a breach of the banking website had occurred, if someone at the printer had looked through the statements, if the statements had been lost in transit, if someone had stolen mail from a mailbox … there are any number of scenarios in which those statements could have been used by the nefarious.

Outcome: Sallie Mae changed my account number, and on the very next statement, only the last four numbers of my credit card number printed on my statement.

Case: Doctor’s office wants to submit my information to a research project.

I made an appointment with a new physician. Part of the paperwork was a permission form to submit my information to a research project. There was no end date for the permission, no information about the research project, and no information about how they de-identified data. I asked the receptionist to clarify some of these details for me, so she called the nurse in charge of the research project. The nurse said de-identified data means that you cannot be identified. I asked her what specific identifying information they used in the study. Instead of answering me (I am pretty sure she did not know), she told me not to sign the form, as the study was not being conducted any more anyway. Which raised another question in my mind: if they were not using my information for a study, why were they asking me if they could use it?

Unfortunately, it takes very little data to identify someone by comparing identity information to public records such as voter registration. One such study conducted at Harvard University by Professor Latanya Sweeney showed that 87% of the population can be uniquely identified through three variables—a patient’s birth date, zip code, and gender.

Outcome: My information is not being used in a research project.

Case: My employer changed insurance companies

For the last 2 years, I have worked as a contractor. My employer decided to switch health insurance carriers. The experience was very disorganized from several standpoints, but from an information privacy perspective, it was a nightmare. The insurance company they chose (not Anthem)—very large and very old—combines a social security number with a three-digit employer code into an account number used for signing up. I called and asked if there was an alternative method of signing up. No.

Against my better judgment—since I needed medical insurance—I decided to sign up. The sign-up website is hosted by a benefits administrator subcontractor. Their privacy policies were a mess, mixing up personal pronouns with collective nouns in several places. A company that is careless about privacy policies often has gaps in other parts of their infrastructure.

Curious about how they treated passwords, I tried using a four-character password. It worked! Of course I changed it to something more secure immediately.

I wrote my employer and the insurance company’s senior management about my concerns, and sent copies to the US Department of Labor. The insurance company response explained that they and their benefit administrator used industry-standard security measures.

Two weeks later, the Anthem breach happened. So much for industry standards.

Outcome: I discontinued insurance coverage.

Case: Conference attendee information thrown in a wastebasket

I volunteer at events a couple of times a year. I like to work the registration desk because I meet a wide variety of people. As we were packing up the registration desk, I saw a listing of conference attendees—name, email, employer, and phone number—in the trash. I plucked it out, and said to the registration coordinator that the list should not be in the trash.

She said she did not have a shredder. I took the list home and shredded it myself.

The next day, I discussed it with the conference coordinator.

Outcome: New procedures to shred confidential information were implemented.

Ask questions. Speak up. Nobody cares more about your data than you do! If you see private information leaking, it is very important to point it out. If you do not want to take the time to do it for yourself, do it for your children and your grandchildren. Do it for your older family members. Do it for people who do not understand how important privacy is. Do it to protect your job.

The fallout from a breach affects customers (identity theft and raised prices), employees (lost jobs and closed stores), and stockholders.

Be the change you want to see!

Be the change … for information privacy! Part 1

Posted: April 20, 2015 by IntentionalPrivacy in Uncategorized

Personal information about us leaks every day in multiple ways.

A friend told me recently that he has no expectation of privacy, and that no one else should either. He thinks that a lack of privacy will affect each of the six generations (according to NPR) that are around today until we work out what information should be private and how to protect it:

  • The GI generation is anyone aged 90 or older; their probable privacy impact will be in the financial and medical information areas, or their identity could be stolen.
  • The Silent generation is between the ages of 72 and 89; their probable privacy impact will be in the financial and medical information areas, or their identity could be stolen. The impact could be greater if they are still working or using social media, email, or electronic banking.
  • Baby Boomers are between the ages of 50 and 71, and they should think about the privacy of their information, especially if they still work. Many people in this generation use email, social media, and electronic banking, so tax returns, financial information, medical information, and other confidential information could be affected. Like every generation, they should protect themselves, their school-aged children, and elder family members against identity theft.
  • Generation Xers are between the ages of 35 and 49, and they should definitely consider privacy issues; many are far too free with their information on social media and through email. Financial information, medical information, and other confidential information are just some of the areas that could be affected, and they must also consider privacy issues for their children and elder family members. Like every generation, they should protect themselves, their school-aged children, and elder family members against identity theft.
  • Millennials are between the ages of 14 and 34, and they should definitely be concerned about the privacy of their information; many people in this age group are far too free with it. Some even post photos of their credit cards on Facebook (argh!). Financial information, medical information, and other confidential information are just some of the areas that could be affected, and they must also consider privacy issues for their children. They should protect themselves and their school-aged children against identity theft.
  • Generation Z (also known as the iGeneration) are children between the ages of one and 13. Children have to depend on the ability of other people to protect their information. For instance, some parents do not understand that they need to check their children's credit ratings as well as their own. By the time a child reaches an age where he or she can take out credit, their identity could have been stolen and their credit ruined. Bad credit can affect a person's ability to get a job, rent or buy a home, or buy a car.

Most people do not understand the need for information privacy (until it affects them) and many organizations—because they are made up of people—do not understand either.

So, what do you do when you realize that an organization is not protecting your private information? Explain to them the change you want to see. I start with a phone call to customer service and if I do not achieve my goal, then I write letters to executives and send copies to regulatory agencies. I may not achieve the results I wanted, but I let them know that if they cannot address my issue, I will choose to move (whenever possible) to a different organization that is more supportive of my needs.

Maybe the organization will not listen this time, but they may be more receptive for the next customer.

Part 2 delivers case histories.