Archive for the ‘Privacy’ Category

No security anywhere …

Posted: May 19, 2017 by IntentionalPrivacy in Conferences, Privacy, Theft, Vulnerabilities

I was at a conference yesterday. When I went to register, the computer system being used had a label with the username and password right next to the touchpad. There was a problem with my registration, so the conference sent me an email. It contained the names of three other people–unknown to me–at the conference.

Next, we went to the exhibits. The first trailer we went to was open and no one was there. On a table inside was an open, logged-in laptop and a cell phone. Who would have known if I had taken the laptop or phone, or worse, taken information from the laptop?

Pay attention to what you do. Always lock your laptop (press the Windows and L keys simultaneously) when you step away, even if you are leaving it with someone you trust, and do not leave your belongings unattended in a vehicle, at a conference, a restaurant, or a coffee shop.

Medical record theft is on the rise, and according to Reuters ( http://www.reuters.com/article/us-cybersecurity-hospitals-idUSKCN0HJ21I20140924 ), a stolen medical record is worth ten times what a stolen credit card number fetches on the black market. Medical records are worth so much more because they are used to steal benefits and commit identity theft and tax fraud.

How easy is it to steal medical records?

This morning, I read Brian Krebs’s report on the True Health Diagnostics health portal, which allowed other patients’ medical test results to be read by changing one digit in the PDF link. The company—based in Frisco, Texas—immediately took the portal down and spent the weekend fixing it. https://krebsonsecurity.com/2017/05/website-flaw-let-true-health-diagnostics-users-view-all-medical-records/

While I think it is great they fixed the problem so rapidly, I am disgusted that our medical information is so often flapping in the breeze. Health professionals are notoriously lax about protecting their patients’ medical information. A security professional that I know defended medical people by saying they do not understand HIPAA/HITECH. Yes, I know they do not necessarily understand the technical details. But is ignorance an excuse? I do not think so. They have IT people to support those computers and medical professionals are supposed to attend HIPAA training on a regular basis.

For instance, upon reading the FAQs at http://www.holisticheal.com/faq-dna , I noticed that after a patient completes their tests (recommended by my doctor), this practitioner sends the results by email. It is not a simple test like cholesterol; the results contain information about someone’s DNA.

After I emailed them and told them I would not consider using their service, because email is not secure unless encrypted and in my opinion sending medical results in unencrypted email is contrary to HIPAA/HITECH, they changed their policy. While they now mail the results to US patients on a computer disk, they still send international clients their results through email.

I have frequently caught my own medical professionals leaving their patient portals open when I am alone in the exam room or even away having tests. During one notable session, without touching the computer, I could see a list of all the patients being seen that day on the left and the doctor’s schedule across the top (including 3 cancellations). Another medical professional texted me part of my treatment plan. (I thought we were limiting our text conversation to time, date, and location. Otherwise I never would have agreed to text. I had never even met this person!) Another provider grouped three receptionists with computers (no privacy screens) in a circle with windows on two sides. I could read two of the screens when signing in and the third when leaving, and I saw them leave their screens open when they walked away from their computers so that the other receptionists could use them.

Granted, these incidents may not be breaches, but I think they are violations of HIPAA/HITECH, and they could lead to breaches. What are the chances they are using appropriate access control, backing up their systems, encrypting their backups, or thinking about third-party access? Are they vulnerable to phishing, crypto ransomware, hackers, employee malfeasance, someone’s child playing with the phone?

Yes, I get that people make mistakes. The problem is they have the ability to make mistakes! Set up fail-safes. Require each employee’s phone to use device encryption, and give them a way to send encrypted emails or texts, or do not allow them to text or email patients. Make screens lock after five minutes or sooner. Give them training. Spot check what they’re doing.

I always discuss these issues, when I notice them, with the practice’s HIPAA Privacy Officer (and sometimes change medical providers if the lapse is egregious). Does it help? Maybe. But it always makes me wonder what I have not seen.

Pay attention! Protecting your data helps protect everybody’s data.

A member of my family has recently been having some medical issues, and has been making the rounds of doctors and other medical practitioners. It is bad enough when someone doesn’t feel well, but what can make it worse? A medical professional being careless with our personal health information in spite of the medical privacy laws (HIPAA and HITECH). A visiting nurse called to make an appointment for a home visit, which turned into an SMS text dialogue. A question from the nurse left me speechless: “Have you received your {INSERT PRESCRIPTION BRAND NAME HERE} yet?”

Really? She really put part of the treatment plan in an unencrypted text message?

Text messaging by a medical professional should be limited to location and time of appointment.

I informed her that in my opinion putting a prescription name in an unencrypted text message was a violation of HIPAA, especially since the patient had never met the nurse or signed any HIPAA disclosures. She said she deleted the messages from her phone and gave me the name of her supervisor. I called the woman, who wasn’t available. I left a voice mail message, saying that I was concerned because putting treatment details in an unencrypted text message was a violation of HIPAA.

Strike two: A week later, no one from the nursing service has called me back.

I called the company that ordered the nursing service, explained what happened, and asked that the service be cancelled. I took the patient to the doctor’s office—much less convenient, but a better option in this case. I was concerned that the nurse might be using a personal phone without encryption, that she might have games installed (a common source of malware), that she did not use a pass code to lock her phone or that her phone did not automatically lock, or any of 100 different bad scenarios. What further concerned me is that I did not receive a call back from the nursing company. They are supposed to have a HIPAA Privacy Officer, who should have returned my call and explained what they were doing to protect the patient’s information in the future. At the very least, the nurse should have been required to re-take HIPAA Patient Privacy training (which the Office for Civil Rights mandates yearly anyway).

Why is this such a big deal?

When you consider that your medical record is worth more to an identity thief than your credit card, it is a very big deal. A CNBC article published on March 11, 2016, “Dark Web is fertile ground for stolen medical records,” stated:

While a Social Security number can be purchased on the dark Web for around $15, medical records fetch at least $60 per record because of that additional information, such as addresses, phone numbers and employment history. That in turn allows criminals to file fake tax returns.

Your credit card might be worth one or two dollars at most.

Another informative article, “Is Texting in Violation of HIPAA?,” appears in The HIPAA Journal.

If you feel that your medical privacy has been violated, you can file a complaint with the Office for Civil Rights.

I’m going to call the nursing service again on Monday and ask to speak with their HIPAA Privacy Officer and try to explain my concerns.

The Bottom Line: They lost a client!

Graham Cluley released an article today called “200 MILLION YAHOO PASSWORDS BEING SOLD ON THE DARK WEB?” about various web sites that have had stolen passwords recently posted on criminal web sites (the “dark web”).

While not really news—new password breaches are revealed quite often—it brings some questions to mind. How do you know if your passwords have been stolen? And what do you do about them?

If you haven’t changed your important passwords recently, you could just assume they have been stolen and change them.

Or, you can look up your email address or user name at a site like LeakedSource.com. When you put in a user name or email and click Search, it will show you possible matching accounts and the types of information contained in their databases for free, but not the actual information. You have to pay to see that.
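Another option is Troy Hunt’s Have I Been Pwned service, which offers a password-range API built on k-anonymity: you send only the first five characters of your password’s SHA-1 hash, so the password itself never leaves your machine. A minimal sketch of the local half of that check (the API URL and flow are my understanding of the public service):

```python
import hashlib

def sha1_range_parts(password):
    """Split the uppercase SHA-1 hex digest into the 5-character prefix
    that is sent to the API and the 35-character suffix kept local."""
    digest = hashlib.sha1(password.encode("utf-8")).hexdigest().upper()
    return digest[:5], digest[5:]

prefix, suffix = sha1_range_parts("password")
print(prefix)  # 5BAA6
# To check for a breach, fetch https://api.pwnedpasswords.com/range/<prefix>
# and search the returned list for <suffix> locally -- the full hash
# (and certainly the password) is never transmitted.
```

Because only a 5-character hash prefix goes over the wire, the service cannot tell which of the hundreds of matching passwords you were asking about.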

Do you actually need to see those old passwords? Probably not; what you really need to know is which accounts were compromised. If you have not changed the password on a compromised account in a while, here’s what to do:

  1. Install some kind of password manager on each of your devices, something well known, such as KeePass 2 or LastPass. Come up with a password for the manager that you will not forget. If you forget it, the password probably cannot be recovered (99.99% chance of no recovery). Keep a copy of the master password somewhere safe—your safe deposit box or even in your wallet if you need to. (Note: this may not protect you against family members or friends who want to know your secrets.) If your wallet gets stolen, you only have one password to change.

You can download those applications from the following sources. Note: Only download applications from the original site:

Personally, I prefer KeePass, but LastPass is much easier to synchronize between devices because it is web-based. However, LastPass has had some recent vulnerabilities.

The nice thing about a password manager is that it will autotype your password (unless the username and password are on separate pages, such as some bank accounts and credit card sites use). Even in those cases, you can drag your username and/or password to the proper place.

  2. Change your important passwords—email, Facebook, MySpace, LinkedIn (for example)—to something at least 15 characters long. Do not reuse a password anywhere! A password safe will generate a password for you, and you can customize length and character types.
  3. If the site offers some kind of multi-factor authentication (MFA), take advantage of it. Yes, it is painful! But you can often set it so that your devices will be remembered for at least 30 days (unless you clear your cache).
  4. Do not share your passwords with anyone! Not your spouse, kids, friends, boss, coworkers, or someone claiming to be from Microsoft support.
  5. Last, change your passwords at least yearly. A good day to change them? World Password Day at https://passwordday.org/ celebrates password security on May 5 every year. They have some funny videos starring Betty White! Check them out!
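A password manager will generate passwords for you, but to make “generate a password” concrete, here is a minimal sketch using Python’s standard `secrets` module (the 20-character default and the character set are my own arbitrary choices):

```python
import secrets
import string

def generate_password(length=20):
    """Build a random password from letters, digits, and punctuation
    using a cryptographically secure random source (not random.choice)."""
    alphabet = string.ascii_letters + string.digits + string.punctuation
    return "".join(secrets.choice(alphabet) for _ in range(length))

print(generate_password())    # different every run, e.g. 'k;V8t#...'
print(len(generate_password()))  # 20
```

The important detail is `secrets` rather than the `random` module: `random` is predictable and unsuitable for passwords.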

Save your information and your privacy. Practice safe MFA like Betty White!

Bleeding Data – South by Southwest workshop

Posted: August 30, 2015 by IntentionalPrivacy in First Steps, Personal safety, Privacy

We put together a workshop proposal called “Bleeding Data: How to Stop Leaking Your Information” for SXSW Interactive. The workshop will help consumers understand data privacy issues. We will demonstrate some tools that are easy to use and free. Please create a login at SXSW and vote for our workshop! http://panelpicker.sxsw.com/vote/50060. Voting is open until September 10, 2015.

I get my hair cut at the local salon of a famous chain of beauty schools that stretches across the US. They are a subsidiary of a much larger, high-end beauty products conglomerate. I have gotten my hair cut at various locations for years. It’s a good value for the money, and the resulting hair cuts are at least as good as and often better than ones I have received at their full-price salons.

Friday, I called to schedule a haircut and a facial. The scheduler asked for my credit card number to reserve my appointment. I asked if this was a new policy. The scheduler said they only asked for a credit card number for services that had a large number of no-shows. I asked when my card was charged, and she tried valiantly to explain how it worked.

I declined to give her my card and asked her to set up an appointment only for the haircut.

The next day, when I went in for my hair cut, I asked for their written policy on storing credit card numbers:

  • How long is the card stored in their system?
  • Who has access to it and what can they see?
  • How and why is a transaction against my number authorized?
  • What other information are they storing with my credit card number? Name, address, phone number …
  • Are they using a third-party application or does a third party have access to my information?
  • Are they following the best practices (for example, encrypted databases and hashing card numbers) recommended by the Payment Card Security Standards Council, in particular, the Payment Application Data Security Standards, which are available from https://www.pcisecuritystandards.org/security_standards/index.php ?

The receptionist referred me to their call center, where I eventually spoke with a manager, who could not answer my questions. She promised to find out and email me the policy, which I have yet to see.

I mailed a letter to the executive chairman of the beauty products conglomerate and the manager of the local school. I am not going back unless they come up with a satisfactory policy. Any organization that stores credit card information should have a written policy that explains how they protect it, and it should be available on customer request. It is not only best practice from a Payment Card Industry point-of-view, but it avoids misunderstandings between customers, employees, and management.

I’ve been a customer for over 20 years. Privacy matters, data security matters, and if your organization doesn’t think enough of my business to adequately protect my information and be able to show me, I am going someplace that will. No matter how much I like your hair cuts.

I had an interesting experience last week (my life seems to be full of them!). I signed up to take a class that purported to give me a better understanding of what I was looking for in a career.

The first day of class the instructor gave us the URL for an application that he had developed to collect a considerable amount of information about each of us: likes, desires, Myers-Briggs profile, and results from other assessment tests. During the class break, I asked him why the application was not using HTTPS. He said it did, but it used a referrer. I looked at the code of the web site. Hmm, not that I could see.

When I got home, I loaded up Wireshark so I could watch the interaction of the packets with the application. The application definitely did not use HTTPS. I emailed the instructor. Oh, he said, there was a mistake in the documentation, and he gave me the “real” secure URL.

Ok, so this application is sending his clients’ first and last names, email addresses, passwords, and usernames in clear text across the Internet. Not a big deal, you say?

It is a big deal, because many people use the same usernames and passwords on their accounts around the Web. Then add in their email address and their personal information is owned by anyone sniffing packets on any unsecured network they might be using, such as an unsecured wireless network in a coffee shop, an apartment building, a dorm room ….

So, next—because I now had their “secure” website URL—I checked their website against http://www.netcraft.com/, https://www.ssllabs.com/ssltest/, and some other sites—all public information. According to these tests, the application was running Apache version 2.2.22, which was released on January 31, 2012, WordPress 3.6.1 (released on September 11, 2013), as well as PHP 5.2.17 (released on January 6, 2011). It is never a good idea to run old software versions, but old WordPress versions are notoriously insecure.

Please note: I am not recommending either of these websites or their products; I merely used them as a method to find information about the application I was examining.

Not only that, but the app used SSLv2 and SSLv3, so the encryption technology is archaic. Qualys SSL Labs gave the app an “F” for its encryption, and that was after he gave me the HTTPS address.
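On the client side, modern TLS libraries let you refuse those archaic protocols outright. A small Python sketch (in current Python versions SSLv2 and SSLv3 are already disabled by default; this just makes the floor explicit):

```python
import ssl

# Build a client context with certificate verification on (the default)
ctx = ssl.create_default_context()

# Refuse anything older than TLS 1.2. SSLv2 and SSLv3 -- the protocols
# that earn an "F" from SSL Labs -- are already excluded by default.
ctx.minimum_version = ssl.TLSVersion.TLSv1_2

print(ctx.minimum_version)  # TLSVersion.TLSv1_2
```

A context configured this way simply fails the handshake against a server that only speaks SSLv3, which is exactly the behavior you want.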

(“It was harder to implement the security than we thought it would be,” he said.)

Although I did not find out the Linux version running on the web server, based on my previous findings—which I confirmed with the application owner—I would be willing to bet that the operating system was also not current.

So, then I tried creating a profile. I made up first and last names, a user name, and a test email address from example.net (https://www.advomatic.com/blog/what-to-use-for-test-email-addresses). I tried “test” for a password, which worked. So the app does not check for password complexity or length.
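A password like “test” should never be accepted. A server-side length and complexity check takes only a few lines; this sketch (my own illustrative rules, not any particular standard) rejects it:

```python
def password_ok(password, min_length=8):
    """Illustrative complexity check: minimum length plus at least one
    lowercase letter, one uppercase letter, and one digit."""
    return (
        len(password) >= min_length
        and any(c.islower() for c in password)
        and any(c.isupper() for c in password)
        and any(c.isdigit() for c in password)
    )

print(password_ok("test"))        # False: too short, no uppercase, no digit
print(password_ok("Xk27!mqPz9"))  # True
```

Real sites should go further (check against breached-password lists, allow long passphrases), but even this much would have stopped “test”.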

He asked me on the second day of class if I now felt more comfortable about entering my information in his application since it was using HTTPS. I said no; I said that his application was so insecure that it was embarrassing, that it appeared to me that they had completely disregarded any considerations about securely coding an application.

He said that they never considered the necessity of securing someone’s information because they were not collecting credit card information.

I said that with the amount of data they collected, a thief could impersonate someone easily. I reminded him that some people use the same usernames and passwords for several accounts, and with that information and an email account, any hijacker was in business.

Then he said that he was depending on someone he trusted to write the code securely.

Although I believe in trust, if it were my application, I would verify any claims of security.

I told him he was lucky someone had not hacked his website to serve up malware. I said that I was not an application penetration tester, but that I could hack his website and own his database in less than 24 hours. I said the only reason it would take me that long is because I would have to read up on how to do it.

I told him I would never feel comfortable entering my information in his application because of the breach of trust between his application and his users. I said that while most people would not care even if I explained why they should care, I have to care. It is my job. If my information was stolen because I entered it in an application that I knew was insecure, I could never work in information security again.

So, what should you look for before you enter your information in an application?

  1. Does the web site use HTTPS? HTTPS stands for Hypertext Transfer Protocol Secure; what that means is that the connection between you and the server is encrypted. If you cannot tell because the HTTPS part of the address is not showing, copy the web address into Notepad or Word, and look for HTTPS at the beginning of the address.
  2. Netcraft.com gives some basic information about the website you’re checking. You do not need to install their toolbar; just put the website name into the box below “What’s that site running?” about midway down the right-hand side.
  3. Qualys SSL Labs tests the encryption (often known as SSL) configuration of a web server. I do not put my information in any web site that does not rate at least a “C.”
  4. Be concerned about sites that serve up malware. Here are some sites that check for malware:

http://google.com/safebrowsing/diagnostic?site=<site name here>

http://hosts-file.net/ — be sure to read their site classifications here

http://safeweb.norton.com/

  5. Do not enter any personal information in a site when using an insecure Wi-Fi connection, such as at a coffee shop or a hotel, just in case the site doesn’t have everything secured on its pages.
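The first check above, confirming the address really starts with HTTPS, can even be automated. A minimal sketch using only Python’s standard library (it inspects the URL string; it does not verify the server’s certificate or configuration):

```python
from urllib.parse import urlparse

def is_https(url):
    """Return True only when the URL's scheme is exactly 'https'."""
    return urlparse(url).scheme.lower() == "https"

print(is_https("https://example.com/login"))  # True
print(is_https("http://example.com/login"))   # False
```

This is the same check you do by eye when you paste an address into Notepad, just made mechanical, which is handy if you want to vet a whole list of links at once.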

The Electronic Frontier Foundation (EFF) recently released a plug-in for Chrome and Firefox called Privacy Badger 1.0. A plug-in is a software module that can be loaded into a browser to add functionality. The Badger plug-in blocks trackers from spying on the web pages you visit.

Why should you care? Because Big Data companies track everything you do online, and what do they do with that data? One thing they do is analyze data to predict consumer behavior. Here are a couple of articles that explain some of the issues: “The Murky World of Third-Party Tracking” is a short overview, while the EFF has a three-part article called “How Online Tracking Companies Know Most of What You Do Online (and What Social Networks Are Doing to Help Them)” that while several years old, is very detailed.

The FTC has gotten involved as well. Here is a link to one of their papers called “Big Data: A Tool for Inclusion or Exclusion?”

I loaded the Badger plug-in as soon as it came out, and I am amazed at the number of trackers it blocks (it does allow a few)! One CNN.com page I visited had over a hundred trackers blocked and a Huffington Post page had almost as many. I also run other plug-ins in Firefox (Ghostery, NoScript, AdBlock Plus, Lightbeam).

The Badger icon in the upper right-hand corner tells you how many are blocked.

The best thing about Badger is that it is very easy to use, unlike NoScript.

Give it a try, and let me know what you think.

Part 1 explains why you might decide to use secure messaging.

If you decide you want to use a secure messaging app, here are some factors you might consider:

  • How secure is the program? Does it send your messages in plaintext or does it encrypt your communications?
  • How user friendly is it?
  • How many people overall use it? A good rule for security and privacy: do not be an early adopter! Let somebody else work the bugs out. The number of users should be at least several thousand.
  • What do users say about using it? Make sure you read both positive and negative comments. Test drive it before you trust it.
  • How many people do you know who use it? Could you persuade your family and friends to use it?
  • How much does it cost?
  • What happens to the message if the receiver is not using the same program as the sender?
    • Does it notify you first and offer other message delivery options or does the message encryption fail?
    • For those cases where the encryption fails, does the message not get sent or is it sent and stored unencrypted on the other end?
  • Will it work on other platforms besides yours? Android, iOS, Blackberry, Windows, etc.
  • Does the app include an anonymizer, such as Tor?
  • While the app itself may be free, consider whether the messages will be sent using data or SMS. Will it cost you money from that standpoint?

The Electronic Frontier Foundation recently published an article called “The Secure Messaging Scorecard” that might help you find an app that meets your needs.

I picked out a few apps that met all of their parameters, and put together some notes on cost, protocols, and platforms. While I have not used any of them, I am looking forward to testing them, and will let you know how it goes.

 

| App Name | Cost | Platforms | Protocols | Notes |
| --- | --- | --- | --- | --- |
| ChatSecure + Orbot | Free; open source; GitHub | iOS, Android | OTR, XMPP, Tor, SQLCipher | |
| CryptoCat | Free; open source; GitHub | Firefox, Chrome, Safari, Opera, OS X, iPhone; Facebook Messenger | OTR (single conversations); XMPP (group conversations) | Group chat, file sharing; not anonymous |
| Off-The-Record Messaging for Windows (Pidgin) | Free | Windows, GNOME2, KDE 3, KDE 4 | OTR, XMPP, file transfer protocols | |
| Off-The-Record Messaging for Mac (Adium) | Free | Adium 1.5 or later on Mac OS X 10.6.8 or newer | OTR, XMPP, file transfer protocols | No recent code audit |
| Signal (iPhone) / RedPhone (Android) | Free | iPhone, Android, and the browser | ZRTP | |
| Silent Phone / Silent Text | https://silentcircle.com/pricing | Desktop: Windows | ZRTP, SCIMP | Calling, texting, video chatting, or sending files |
| Telegram (secret chats) | Free | Android, iPhone/iPad, Windows Phone, web version, OS X (10.7 and up), Windows/Mac/Linux | MTProto | Cloud-based; runs a cracking contest periodically |
| TextSecure | Free | Android | Curve25519, AES-256, HMAC-SHA256 | |

Sources
http://en.flossmanuals.net/basic-internet-security/ch048_tools-secure-textmessaging/
http://security.stackexchange.com/questions/11493/how-hard-is-it-to-intercept-sms-two-factor-authentication
http://www.bbc.co.uk/news/technology-16812064
http://www.practiceunite.com/notifications-the-3-factor-in-choosing-a-secure-texting-solution/
http://www.tomsguide.com/us/iphone-jailbreak-risks,news-18850.html

When you send a message, who controls your messages? You write them and you get them, but what happens in the middle? Where are they stored? Who can read them? Email, texts, instant messaging and Internet relay chat (IRC), videos, photos, and (of course) phone calls all require software. Those programs are loaded on your phone or your tablet by the device manufacturer and the service provider. However, you can choose to use other – more secure – programs.

In the old days of the 20th century, a landline telephone call (or a fax) was an example of point-to-point service. Except for wiretaps or party lines, or situations where you might be overheard or the fax intercepted, that type of messaging was reasonably secure. Today, messaging does not usually go from your device—whether it is a cell phone, laptop, computer, or tablet—directly to the receiver’s device. Landlines are becoming scarcer, as digital phones using Voice over IP (VoIP) are becoming more prevalent. Messages are just like any other Internet activities: something (or someone) is in the middle.

It’s a lot like the days when an operator was necessary to connect your call. You are never really sure if someone is listening to your message.

What that means is that a digital message is not secure without extra precautions. It may go directly from your device to your provider’s network or it may be forwarded from another network; it often depends on where you are located in relation to a cell phone tower and how busy it is. Once the message has reached your provider’s network, it may bounce to a couple of locations on their network, and then—depending on whether your friend is a subscriber of the same provider—the message may stay on the same network or it may hop to another provider’s network, where it will be stored on their servers, and then finally be delivered to the recipient.

Understand that data has different states, and that how the data is treated may differ depending on the state. Data can be encrypted when it is transmitted and it can be encrypted when it is stored, or it can remain unencrypted in either state.
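To make the “encrypted at rest” state concrete, here is a toy sketch of encrypting a message before storing it. The XOR keystream here is for illustration only; real applications must use a vetted library (libsodium/NaCl, or the AES-based ciphers real messaging apps use), never homemade crypto:

```python
import secrets

def xor_cipher(data: bytes, key: bytes) -> bytes:
    """Toy XOR 'encryption' -- illustrates the idea only, NOT secure
    for real use when the key is reused or shorter than the data."""
    return bytes(b ^ k for b, k in zip(data, key))

message = b"Have you received your prescription yet?"
key = secrets.token_bytes(len(message))  # one random key byte per message byte

stored = xor_cipher(message, key)          # what lands on disk or a server
print(stored != message)                   # ciphertext is unreadable without the key
print(xor_cipher(stored, key) == message)  # the same key recovers the plaintext
```

The point of the sketch is the second line of output: whoever holds the key can read the stored message, which is exactly why it matters who controls the keys.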

Everywhere it stops on the path from your device to the destination, the message is stored. The length of time it is kept in storage depends on the provider’s procedures, and it could be kept for weeks or even years. It gets backed up and it may be sent to offsite storage. At any time along its travels, it can be lost, stolen, intercepted, or subpoenaed. If the message itself is encrypted, it cannot be read without access to the key. If the application belongs to your provider, they may be able to read the message even though it is encrypted, because they may hold the key.

Is the message sent over an encrypted channel or is it sent in plain text? If you are sending pictures of LOLZ cats, who cares? But if you are discussing, say, a work-related topic, or a medical or any other confidential issue, you might not want your messages available on the open air.

In fact, it is better for you and your employer if you keep your work and personal information separated on your devices. That can mean carrying a device strictly for work, or using a Mobile Device Management application your employer installed that acts as a container for your employer’s information. If you do not keep your information separate and your job suddenly comes to an end, your employer may have the right to wipe your personal device, or you may not be able to retrieve any personal information stored on a work phone. Those policies you barely glanced at before you signed them when you started working at XYZ Corporation? It is a good idea to review them at least once a year and have a contingency plan! I have heard horror stories about baby pictures and novels that were lost forever after a job change.

Are you paranoid yet? If not, I have not explained this very well!

A messaging app that uses encryption can protect your communications with the following disclaimers. These apps cannot protect you against a key logger or malware designed to intercept your communications. They cannot protect you if someone has physical or root access to your phone. That is one of the reasons that jail-breaking your phone is such a bad idea—you are breaking your phone’s built-in security protections.

An app also cannot protect you against leaks by someone you trusted with your information. Remember: If you do not want the files or the texts you send to be leaked by someone else, do not send the information.

If you decide that you want to try one or more messaging applications, it is really important to read the documentation thoroughly so you understand what the app does and what it does not do and how to use it correctly. And, finally: Do not forget your passphrase!! Using a password manager such as KeePass or LastPass is a necessity today. Also back up your passwords regularly and put a copy—digital and/or paper—of any passwords you cannot afford to lose in a safe deposit box or cloud storage. If you decide to use cloud storage, make sure you encrypt the file before you upload it. Cloud storage is a term that means you are storing your stuff on someone else’s computer.
