Bleeding Data – South by Southwest workshop

Posted: August 30, 2015 by IntentionalPrivacy in First Steps, Personal safety, Privacy

We put together a workshop proposal called “Bleeding Data: How to Stop Leaking Your Information” for SXSW Interactive. The workshop will help consumers understand data privacy issues. We will demonstrate some tools that are easy to use and free. Please create a login at SXSW and vote for our workshop! Voting is open until September 10, 2015.

I get my hair cut at the local salon of a famous chain of beauty schools that stretches across the US. They are a subsidiary of a much larger, high-end beauty products conglomerate. I have gotten my hair cut at various locations for years. It’s a good value for the money, and the resulting haircuts are at least as good as, and often better than, ones I have received at their full-price salons.

Friday, I called to schedule a haircut and a facial. The scheduler asked for my credit card number to reserve my appointment. I asked if this was a new policy. The scheduler said they only asked for a credit card number for services that had a large number of no-shows. I asked when my card was charged, and she tried valiantly to explain how it worked.

I declined to give her my card and asked her to set up an appointment only for the haircut.

The next day, when I went in for my haircut, I asked for their written policy on storing credit card numbers:

  • How long is the card stored in their system?
  • Who has access to it and what can they see?
  • How and why is a transaction against my number authorized?
  • What other information are they storing with my credit card number? Name, address, phone number …
  • Are they using a third-party application or does a third party have access to my information?
  • Are they following the best practices (for example, encrypting databases and hashing card numbers) recommended by the PCI Security Standards Council, in particular the Payment Application Data Security Standard (PA-DSS), which is published on the Council’s website?

The receptionist referred me to their call center, where I eventually spoke with a manager, who could not answer my questions. She promised to find out and email me the policy, which I have yet to see.

I mailed a letter to the executive chairman of the beauty products conglomerate and the manager of the local school. I am not going back unless they come up with a satisfactory policy. Any organization that stores credit card information should have a written policy that explains how they protect it, and it should be available on customer request. It is not only best practice from a Payment Card Industry point of view, but it also avoids misunderstandings between customers, employees, and management.

I’ve been a customer for over 20 years. Privacy matters, data security matters, and if your organization doesn’t think enough of my business to adequately protect my information and show me how, I am going someplace that will. No matter how much I like your haircuts.

I had an interesting experience last week (my life seems to be full of them!). I signed up to take a class that purported to give me a better understanding of what I was looking for in a career.

The first day of class, the instructor gave us the URL for an application that he had developed to collect a considerable amount of information about each of us: likes, desires, Myers-Briggs profile, and results from other assessment tests. During the class break, I asked him why the application was not using HTTPS. He said it did, but that it used a referrer. I looked at the source code of the website. Hmm, not that I could see.

When I got home, I loaded up Wireshark so I could watch the packets traveling between my machine and the application. The application definitely did not use HTTPS. I emailed the instructor. Oh, he said, there was a mistake in the documentation, and he gave me the “real” secure URL.
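For anyone who wants to see this for themselves, here is a rough Python equivalent of what I was watching for in Wireshark, using the third-party scapy library (my own sketch; any packet sniffer shows the same thing). It prints any port-80 payload that looks like a login form being submitted.

    # A rough sketch of what I was watching for in Wireshark, using scapy instead.
    # Run it with root/administrator rights while the login page is submitted;
    # port 80 traffic is plain HTTP, so the payload is readable as-is.
    from scapy.all import TCP, Raw, sniff

    def show_cleartext(packet):
        if packet.haslayer(TCP) and packet.haslayer(Raw):
            payload = bytes(packet[Raw])
            if b"username=" in payload or b"password=" in payload:
                print(payload.decode(errors="replace"))

    sniff(filter="tcp port 80", prn=show_cleartext, store=False)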

Ok, so this application is sending his clients’ first and last names, email addresses, passwords, and usernames in clear text across the Internet. Not a big deal, you say?

It is a big deal, because many people use the same usernames and passwords on their accounts around the Web. Then add in their email address and their personal information is owned by anyone sniffing packets on any unsecured network they might be using, such as an unsecured wireless network in a coffee shop, an apartment building, a dorm room ….

So, next—because I now had their “secure” website URL—I checked it against a couple of free public lookup services. According to these tests, the application was running Apache 2.2.22 (released on January 31, 2012), WordPress 3.6.1 (released on September 11, 2013), and PHP 5.2.17 (released on January 6, 2011). It is never a good idea to run old software versions, and old WordPress versions are notoriously insecure.

Please note: I am not recommending these lookup services or their products; I merely used them as a way to find information about the application I was examining.
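If you are curious how much of this information is simply volunteered, here is a minimal Python sketch (mine, with a placeholder URL, not the application from this post) that prints the version details many servers announce in their response headers and in WordPress’s generator tag.

    # Many servers announce their software versions in plain response headers,
    # and WordPress often leaks its version in a "generator" meta tag.
    # The URL below is a placeholder, not the application discussed in this post.
    import requests

    response = requests.get("https://www.example.com/", timeout=10)
    print("Server:      ", response.headers.get("Server"))        # e.g. "Apache/2.2.22 (Ubuntu)"
    print("X-Powered-By:", response.headers.get("X-Powered-By"))  # e.g. "PHP/5.2.17"

    marker = 'name="generator"'
    if marker in response.text:
        start = response.text.find(marker)
        print(response.text[start:start + 80])  # e.g. ...content="WordPress 3.6.1"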

Not only that, but the app still accepted SSL 2 and SSL 3, so the encryption technology is archaic. Qualys SSL Labs gave the app an “F” for its encryption, and that was after he gave me the HTTPS address.
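You do not need SSL Labs for a first impression, either. Here is a minimal sketch (again with a placeholder hostname) that asks a server which protocol version and cipher it will negotiate; modern Python builds refuse SSL 2 and SSL 3 outright, so a server stuck on those protocols will not even complete the handshake.

    # Ask a server which TLS version and cipher suite it negotiates.
    # The hostname is a placeholder, not the application from this post.
    import socket
    import ssl

    host = "www.example.com"
    context = ssl.create_default_context()
    with socket.create_connection((host, 443), timeout=10) as sock:
        with context.wrap_socket(sock, server_hostname=host) as tls:
            print(tls.version())  # e.g. "TLSv1.2" or "TLSv1.3"
            print(tls.cipher())   # the negotiated cipher suite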

(“It was harder to implement the security than we thought it would be,” he said.)

Although I did not find out the Linux version running on the web server, based on my previous findings—which I confirmed with the application owner—I would be willing to bet that the operating system was also not current.

So, then I tried creating a profile. I made up a first and last name and a username, and created a test address at a disposable-email service. I tried “test” for a password, which worked. So, the app does not check password complexity or length.
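For comparison, a basic length-and-complexity check takes only a few lines. The rules below are my own illustration of the kind of check the application was missing, not anything the vendor claimed to implement.

    # A minimal example of a password length-and-complexity check.
    # The specific rules are illustrative; adapt them to your own policy.
    import string

    def password_is_acceptable(password: str) -> bool:
        return (
            len(password) >= 15
            and any(c.islower() for c in password)
            and any(c.isupper() for c in password)
            and any(c.isdigit() for c in password)
            and any(c in string.punctuation for c in password)
        )

    print(password_is_acceptable("test"))                         # False
    print(password_is_acceptable("CorrectHorse9!BatteryStaple"))  # True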

He asked me on the second day of class if I now felt more comfortable about entering my information in his application since it was using HTTPS. I said no; I said that his application was so insecure it was embarrassing, and that it appeared to me they had completely disregarded secure coding practices.

He said that they never considered the necessity of securing someone’s information because they were not collecting credit card information.

I said that with the amount of data they collected, a thief could impersonate someone easily. I reminded him that some people use the same usernames and passwords for several accounts, and with that information and an email account, any hijacker was in business.

Then he said that he was depending on someone he trusted to write the code securely.

Although I believe in trust, if it were my application, I would verify any claims of security.

I told him he was lucky someone had not hacked his website to serve up malware. I said that I was not an application penetration tester, but that I could hack his website and own his database in less than 24 hours. I said the only reason it would take me that long is because I would have to read up on how to do it.

I told him I would never feel comfortable entering my information in his application because of the breach of trust between his application and his users. I said that while most people would not care even if I explained why they should care, I have to care. It is my job. If my information was stolen because I entered it in an application that I knew was insecure, I could never work in information security again.

So, what should you look for before you enter your information in an application?

  1. Does the website use HTTPS? HTTPS stands for Hypertext Transfer Protocol Secure; it means the connection between you and the server is encrypted. If you cannot tell because the HTTPS part of the address is not showing, copy the web address into Notepad or Word and look for HTTPS at the beginning of the address (a short sketch of this check follows the list).
  2. A site-report lookup service gives some basic information about the website you’re checking. You do not need to install their toolbar; just put the website name into the box below “What’s that site running?” about midway down the right-hand side.
  3. Qualys SSL Labs tests the SSL/TLS encryption configuration of a web server. I do not put my information in any website that does not score at least a “C.”
  4. Another thing you should be concerned about is a site that serves up malware. Several free services will scan a site name for malware; be sure to read their site classifications so you understand the results.

  5. Do not enter any personal information in a site when using an insecure Wi-Fi connection, such as at a coffee shop or a hotel, just in case the site doesn’t have everything secured on its pages.
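Here is the sketch promised in item 1: a few lines of Python (with a placeholder address) that pull the scheme out of a pasted address instead of hunting for it in Notepad.

    # Check whether an address is HTTPS before you type anything into the site.
    # The address below is a placeholder.
    from urllib.parse import urlparse

    address = "http://signup.example.com/profile"
    if urlparse(address).scheme.lower() == "https":
        print("Encrypted connection (HTTPS)")
    else:
        print("NOT encrypted - this is plain HTTP")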

The Electronic Frontier Foundation (EFF) recently released a plug-in for Chrome and Firefox called Privacy Badger 1.0. A plug-in is a software module that can be loaded into a browser to add functionality. What the Badger plug-in does is block trackers from spying on the web pages you visit.

Why should you care? Because Big Data companies track everything you do online, and what do they do with that data? One thing they do is analyze data to predict consumer behavior. Here are a couple of articles that explain some of the issues: “The Murky World of Third-Party Tracking” is a short overview, while the EFF has a three-part article called “How Online Tracking Companies Know Most of What You Do Online (and What Social Networks Are Doing to Help Them)” that, while several years old, is very detailed.

The FTC has gotten involved as well, with a paper called “Big Data: A Tool for Inclusion or Exclusion?”

I loaded the Badger plug-in as soon as it came out, and I am amazed at the number of trackers it blocks (it does allow a few)! One page I visited had over a hundred trackers blocked and a Huffington Post page had almost as many. I also run other plug-ins in Firefox (Ghostery, NoScript, AdBlock Plus, Lightbeam).

The Badger icon in the upper right-hand corner tells you how many are blocked.

The best thing about Badger is that it is very easy to use, unlike NoScript.

Give it a try, and let me know what you think.

As I do almost every day, I was looking through security news this morning. An article by Graham Cluley about a security issue (CERT CVE-2015-2865) with the SwiftKey keyboard on Samsung Galaxy phones caught my eye. The issue exists because the keyboard updates itself automatically over an unencrypted HTTP connection instead of over HTTPS and does not verify the downloaded update. The keyboard cannot be uninstalled, disabled, or replaced with a safer version from the Google Play store. Even if it is not the default keyboard on your phone, successful exploitation of this issue could allow a remote attacker to access your camera, microphone, or GPS, install malware, or spy on you.
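To see why the unverified HTTP download is the dangerous part, here is a minimal sketch of the safeguard an updater should apply: fetch the file over HTTPS and refuse it unless its SHA-256 digest matches a value the vendor published. The URL and digest below are placeholders, not SwiftKey’s.

    # Fetch an update over HTTPS and verify its published SHA-256 digest before use.
    # The URL and expected digest are placeholders for illustration only.
    import hashlib
    import requests

    UPDATE_URL = "https://updates.example.com/languagepack.bin"
    EXPECTED_SHA256 = "replace-with-the-digest-the-vendor-publishes"

    data = requests.get(UPDATE_URL, timeout=30).content
    if hashlib.sha256(data).hexdigest() != EXPECTED_SHA256:
        raise RuntimeError("Update failed integrity check; refusing to install it")
    print("Digest matches; the update can be handed to the installer")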

Samsung provided a firmware patch early this year to affected cell phone service providers.

What to do: Check with your cell phone service provider to see if the patch has been applied to your phone. I talked to Verizon this morning, and my phone does have the patch. Do not attach your phone to an insecure Wi-Fi connection until you are sure you have the patch (connecting to insecure Wi-Fi is not a good idea anyway).


An interesting article in Atlantic Monthly discusses purging data from online government and corporate (think insurance or Google) databases once it is two years old, since these organizations cannot keep the databases secure. I can see the point, but some of that information may actually be useful or even needed after two years. For instance, I would prefer that background checks were kept for longer than two years, although I would certainly like the information they contain to be secured.

Maybe archiving is a better idea than purging. It is an interesting option, and it certainly deserves more thought.


Lastly, LastPass: I highly recommend password managers. I tried LastPass and it was not for me. I do not like the idea of storing my sensitive information in the cloud (for “cloud” think “someone else’s computer”), but it is very convenient. Most of the time, you achieve convenience by giving up some part of security.

LastPass announced a breach on Monday, and not their first. They said that “LastPass account email addresses, password reminders, server per user salts, and authentication hashes were compromised.”
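If “server per user salts, and authentication hashes” sounds like jargon, here is the idea in miniature (my own illustration; the algorithm and iteration count are not LastPass’s actual settings). The salt is a random value unique to each user, so two people with the same master password still produce different stored hashes, and a thief cannot attack the whole table with one precomputed list.

    # What "per-user salt + authentication hash" means, in miniature.
    # Algorithm and iteration count are illustrative, not LastPass's settings.
    import hashlib
    import hmac
    import os

    def make_auth_record(master_password: str):
        salt = os.urandom(16)  # unique per user
        auth_hash = hashlib.pbkdf2_hmac("sha256", master_password.encode(), salt, 100_000)
        return salt, auth_hash  # the pair the server stores; the password itself is never kept

    def verify(master_password: str, salt: bytes, stored_hash: bytes) -> bool:
        candidate = hashlib.pbkdf2_hmac("sha256", master_password.encode(), salt, 100_000)
        return hmac.compare_digest(candidate, stored_hash)

    salt, stored = make_auth_record("my long master passphrase")
    print(verify("my long master passphrase", salt, stored))  # True
    print(verify("test", salt, stored))                       # False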

For mitigation: They have told their user community that they will require verification when a user logs in from a new device or IP address. In addition:

  1. You should change your master password, particularly if you have a weak password. If you used your master password on other sites, you should change those passwords as well.
  2. To make a strong password, make it long. It should be at least 15 characters (longer is better) and contain upper- and lowercase letters, digits, and symbols. It should not contain family, pet, or friend names, hobby or sports references, birthdates, wedding anniversaries, or topics you blog about. Passphrases are a good idea, and you can make them even more secure by taking the first letter of each word of a long phrase that you will remember (a short sketch of this step follows the list). For example:

    I love the Wizard of Oz! It was my favorite movie when I was a child.


    IltWoO! IwmfmwIwac$

    Everywhere a letter is used a second time, substitute a numeral or symbol, and it will be difficult to crack:

    IltWo0! 1>mf3wi<@c$

  3. When you create a LastPass master password, it will ask you to create a reminder. Let’s say you took your childhood dog’s name, added the number “42,” and the color “blue” because he had a blue collar to make your new master password: osC@R-forty2-Blew! If your reminder is “dog 42 blue,” your password could be much easier to crack. Maybe you even talked about Oscar in a Facebook post. So again, do not use a pet’s name in your password. Then put something in for the reminder that has no relation to your password: “Blank” or “Poughkeepsie” for instance.
  4. Keep your master password someplace safe. Do not leave a copy in clear text on your phone or your computer or taped to your monitor. Put it in a locked drawer or better—your safe deposit box.
  5. Back up your password database periodically to a device you store offline. Printing the list and storing both the printout and the backup in a sealed envelope in your safe deposit box is a good idea as well.
  6. Use two-factor authentication. If you don’t know anything about it, this Google account article will explain it.
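Here is the sketch promised in item 2: a few lines of Python that apply the first-letter trick to the Wizard of Oz phrase above. Swapping repeated letters for digits and symbols, as in the final example, is best done by hand so that only you know the substitutions.

    # Turn a memorable phrase into a first-letter password, keeping trailing punctuation.
    import string

    phrase = "I love the Wizard of Oz! It was my favorite movie when I was a child."

    parts = []
    for word in phrase.split():
        parts.append(word[0])                  # first letter of each word
        if word[-1] in string.punctuation:     # keep "!" and "." as extra symbols
            parts.append(word[-1])

    print("".join(parts))  # IltWoO!IwmfmwIwac.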

Let me tell you about children who are leading changes in a wide variety of areas including education, research on cancer and asthma, and even information security and privacy. It was eye-opening to me because many people—including me!—discount discoveries made by children because they are “too young” to add significant information to a dialog. What they could add—if we give them a chance—is a fresh perspective.

I recently had the opportunity to attend an information security keynote presentation given by Reuben Paul. I attend many security events every year, so that might not seem so unusual, except that this amazing young man is only nine years old. He gave his first information security presentation, “Infosec from the Mouth of Babes,” at the 2014 DerbyCon conference in Kentucky at the age of eight, and he has given many presentations since then. Here is his story. His father, Mano Paul, is an information security trainer and consultant.

Reuben’s talk at DerbyCon discussed three topics:

  1. Why should you teach kids about Information Security?
  2. How can you teach kids about Information Security?
  3. What can kids teach you about Information Security?

Reuben’s advice at DerbyCon? “[Parents and educators should] teach … kids to use [technology] safely and securely.”

Many grownups do not have the level of understanding of privacy and security that Reuben does. How did Reuben gain that understanding? Reuben credits his parents and his school for being supportive, but some credit belongs to Reuben. He imagined how children could participate in information security and privacy, and insisted on being heard. That takes, well, imagination as well as persistence.

Then I started looking at other amazing children. I found a section on TED Talks called “TED under 20.”

One of the first videos I saw was called “Science is for everyone, kids included.” The video tells the story of neuroscientist Beau Lotto working with a class of 25 eight- to ten-year-old children from Blackawton Primary School in Blackawton, Devon, UK. The children developed an experiment on training bees to choose flowers according to rules. Then the children wrote and submitted a paper, which was published in the Royal Society’s Biology Letters.

The paper is free to download and fun to read!

The conclusion the Blackawton Primary School children came to was that “Play enables humans (and other mammals) to discover (and create) relationships and patterns. When one adds rules to play, a game is created. This is science: the process of playing with rules that enables one to reveal previously unseen patterns of relationships that extend our collective understanding of nature and human nature.”

Jo Lunt, science teacher at Blackawton Primary School, said, “I think one of the biggest changes I’ve seen is the children’s approach to learning science. They don’t get so hung up or worried about getting the answer right. They think more about the journey they’re on and the learning they’re doing along the way.”

“How I harnessed the wind” is the story of William Kamkwamba. Malawi, the country where he lived, experienced a drought in 2001. He and his family not only couldn’t pay for his schooling, they were all starving because their crops failed. He was determined to help his family find a solution for the drought. He found a book in the library with plans for a windmill. At the age of 14, he built his first windmill from scrap-yard materials to pump water for crop irrigation and to create electricity.

“Award-winning teenage science in action” explains the projects of the three teenage girls who won the 2011 Google Science Fair. Lauren Hodge, in the 13-14 age category, conducted her research on how carcinogens form while grilling chicken. Shree Bose, in the 17-18 age category and the grand prize winner, concentrated on reasons why cancer survivors develop resistance to chemotherapy. Naomi Shah, in the 15-16 age category, used a complex mathematical model to look at ways to improve air quality for asthmatics.

Children learn very rapidly, and since they have used technology all their lives, they will often master new skills with an ease that will take your breath away. Be the change, mentor change, and be willing to change. Be open to learning from anyone who can teach you!

Be the change … for information privacy! Part 2

Posted: April 20, 2015 by IntentionalPrivacy in Uncategorized

Continued from Part 1

Case: Identity theft

I first became aware of identity theft in 1996 when my credit card number was stolen. I had rented a car, and my application had every piece of information that they could dig out of me, including a copy of the front and back of my credit card and my driver’s license. I can picture the rental agent leaving my file on the desk in her cubicle while she went to the car lot to check out a car for her next clients, who were sitting at her desk. While she was gone, they copied my information. At the time, I lived only a couple of hours away from the Canadian border. When the credit card company called to alert me because the card usage was not in line with my card profile, they told me somebody in Canada used my card to buy “services” over the phone. The charge was only $75, but it took forever to get it off my bill: paperwork, registered letters, phone calls, time, and frustration.

Of course, the car rental company (a nationwide chain) denied everything, but it was the only place I had used my card where the information was out of my sight, and the thieves had details such as my address.

Outcome: I persisted and eventually the charge was removed from my statement.

Case: Confidential information sent in email

We used the same accountant for years. One year, after we completed the intake for our taxes, I asked him when we could stop back to pick up the completed forms. He said that he would email them to us. I asked if they would be encrypted, and he said yes. I nodded and we left. Three days later, I received an email with our taxes attached as a PDF.

I was livid. I called his office, and asked why he had sent our taxes in an unencrypted PDF through email. He told me they were encrypted.

What he meant was that he filed taxes with the IRS using SSL encryption. He said that his IT staff told him they could not encrypt email attachments.

Email is not a secure method of communication. While it may be transmitted using TLS (the Transport Layer Security protocol encrypts the connection between mail servers), if a receiving server does not accept encrypted transfers, your email is sent as unencrypted plain text.

I tried to explain to him that he could even use a compression program like WinZip (which I knew he had) with a shared password. Nope. He would not listen. I sent him a letter where I carefully explained cryptography and options for using it that were inexpensive and not difficult to implement. Then I explained that because he would not consider them, I was ending our business relationship. That was very difficult because our families had been friends for years.
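For what it is worth, the same idea as a password-protected WinZip archive takes only a few lines with the third-party Python cryptography package (my own sketch, not what I proposed to him): derive a key from a passphrase shared out of band, encrypt the PDF, and only then attach it to email.

    # Encrypt a file with a key derived from a shared passphrase before emailing it.
    # Uses the third-party "cryptography" package; file names are placeholders.
    import base64
    import os
    from cryptography.fernet import Fernet
    from cryptography.hazmat.primitives import hashes
    from cryptography.hazmat.primitives.kdf.pbkdf2 import PBKDF2HMAC

    passphrase = b"a long passphrase exchanged by phone, not by email"
    salt = os.urandom(16)  # sent alongside the ciphertext; it is not secret

    kdf = PBKDF2HMAC(algorithm=hashes.SHA256(), length=32, salt=salt, iterations=480_000)
    key = base64.urlsafe_b64encode(kdf.derive(passphrase))

    with open("tax_return.pdf", "rb") as source:
        ciphertext = Fernet(key).encrypt(source.read())

    with open("tax_return.pdf.enc", "wb") as target:
        target.write(salt + ciphertext)  # recipient splits off the 16-byte salt to decrypt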

Outcome: I changed our tax accountant.

Case: My financial institution leaked my information when they changed the monthly statement format.

I use online bill payment to pay most of my bills. I always reconcile my paper statement against my checkbook register. A couple of years ago when I glanced through my statement, I saw to my horror that both my social security number and my credit card number were printed in the transaction section of the statement. I realized that one of my Sallie Mae loans was using my social security number as the account number when they returned payment information.

I called the bank immediately.

The customer service representative did not understand my concerns. When I hung up, I wrote a letter to each board member at my financial institution and sent copies to Sallie Mae and my credit card company. The credit card company did not understand either, but Sallie Mae stopped using my social security number as my account number (which they should not have been doing anyway; it’s against the law).

Think about how many people this could have affected! If a breach of the banking website occurred, if someone at the printer looked through the statements, if the statements got lost in transit, if someone had stolen mail from a mailbox … there are a number of scenarios where statements could have been used by the nefarious.

Outcome: Sallie Mae changed my account number, and on the very next statement, only the last four numbers of my credit card number printed on my statement.

Case: Doctor’s office wants to submit my information to a research project.

I made an appointment with a new physician. Part of the paperwork was a permission form to submit my information to a research project. There was no end date for the permission, no information about the research project, and no information about how they de-identified data. I asked the receptionist to clarify some of these details for me, so she called the nurse in charge of the research project. The nurse said de-identified data means that you cannot be identified. I asked her what specific identifying information they used in the study. Instead of answering me (I am pretty sure she did not know), she told me not to sign the form, as the study was not being conducted anymore anyway. Which brought up another question in my mind: If they were not using my information for a study, why were they asking me if they could use it?

Unfortunately, it takes very little data to identify someone by comparing identity information to public records such as voter registration. One such study, conducted at Harvard University by Professor Latanya Sweeney, showed that 87% of the population can be uniquely identified through three variables: a patient’s birth date, ZIP code, and gender.

Outcome: My information is not being used in a research project.

Case: My employer changed insurance companies

For the last two years, I have worked as a contractor. My employer decided to switch health insurance carriers. The experience was very disorganized from several standpoints, but from an information privacy perspective, it was a nightmare. The insurance company they chose (not Anthem)—very large and very old—combines a social security number with a three-digit employer code into an account number used for signing up. I called and asked if there was an alternative method of signing up. No.

Against my better judgment—since I needed medical insurance—I decided to sign up. The sign-up website is hosted by a benefits administrator subcontractor. Their privacy policies were a mess, mixing up personal pronouns with collective nouns in several places. A company that is careless about privacy policies often has gaps in other parts of their infrastructure.

Curious about how they treated passwords, I tried using a four-character password. It worked! Of course I changed it to something more secure immediately.

I wrote my employer and the insurance company’s senior management about my concerns, and sent copies to the US Department of Labor. The insurance company response explained that they and their benefit administrator used industry-standard security measures.

Two weeks later, the Anthem breach happened. So much for industry standards.

Outcome: I discontinued insurance coverage.

Case: Conference attendee information thrown in a wastebasket

I volunteer at events a couple of times a year. I like to work the registration desk because I meet a wide variety of people. As we were packing up the registration desk, I saw a listing of conference attendees—name, email, employer, and phone number—in the trash. I plucked it out, and said to the registration coordinator that the list should not be in the trash.

She said she did not have a shredder. I took the list home and shredded it myself.

The next day, I discussed it with the conference coordinator.

Outcome: New procedures to shred confidential information were implemented.

Ask questions. Speak up. Nobody cares more about your data than you do! If you see private information leaking, it is very important to point it out. If you do not want to take the time to do it for yourself, do it for your children and your grandchildren. Do it for your older family members. Do it for people who do not understand how important privacy is. Do it to protect your job.

The fallout from a breach affects customers (identity theft and raised prices), employees (lost jobs and closed stores), and stockholders.

Be the change you want to see!

Be the change … for information privacy! Part 1

Posted: April 20, 2015 by IntentionalPrivacy in Uncategorized

Personal information about us leaks every day in multiple ways.

A friend told me recently that he has no expectation of privacy, and that no one else should either. He thinks that a lack of privacy will affect each of the six generations (according to NPR) that are around today until we work out what information should be private and how to protect it:

  • The GI generation is anyone aged 90 or older; their probable privacy impact will be in the financial and medical information areas, or their identity could be stolen.
  • The Silent generation is between the ages of 72 and 89; their probable privacy impact will be in the financial and medical information areas, or their identity could be stolen. The privacy impact could be greater if they are still working or using social media, email, or electronic banking.
  • Baby Boomers are between the ages of 50 and 71, and they should think about the privacy of their information, especially if they still work. Many people in this generation use email, social media, and electronic banking, so tax returns, financial information, medical information, and other confidential information could be affected. Like every generation, they should protect themselves, their school-aged children, and elder family members against identity theft.
  • Generation Xers are between the ages of 35 and 49, and they should definitely consider privacy issues; many are far too free with their information on social media and through email. Financial information, medical information, and other confidential information are just some of the areas that could be affected, and they also must consider privacy issues for their children and elder family members. Like every generation, they should protect themselves, their school-aged children, and elder family members against identity theft.
  • Millennials are between the ages of 14 and 34, and they should definitely be concerned about the privacy of their information; many people in this age group are far too free with their information. Sometimes people in this age group even post photos of their credit card on Facebook (argh!). Financial, medical, and other confidential information are just some of the areas that could be affected, and they also must consider privacy issues for their children. They should protect themselves and their school-aged children against identity theft.
  • Generation Z (also known as the iGeneration) are children between the ages of one and 13. Children have to depend on the ability of other people to protect their information. For instance, some parents do not understand that they need to check their children’s credit ratings as well as their own. By the time a child is old enough to take out credit, his or her identity could have been stolen and credit ruined. Bad credit can affect a person’s ability to get a job, rent or buy a home, or buy a car.

Most people do not understand the need for information privacy (until it affects them) and many organizations—because they are made up of people—do not understand either.

So, what do you do when you realize that an organization is not protecting your private information? Explain to them the change you want to see. I start with a phone call to customer service and if I do not achieve my goal, then I write letters to executives and send copies to regulatory agencies. I may not achieve the results I wanted, but I let them know that if they cannot address my issue, I will choose to move (whenever possible) to a different organization that is more supportive of my needs.

Maybe the organization will not listen this time, but they may be more receptive for the next customer.

Part 2 delivers case histories.

Part 1 explains why you might decide to use secure messaging.

If you decide you want to use a secure messaging app, here are some factors you might consider:

  • How secure is the program? Does it send your messages in plaintext or does it encrypt your communications?
  • How user friendly is it?
  • How many people overall use it? A good rule for security and privacy: do not be an early adopter! Let somebody else work the bugs out. The number of users should be at least several thousand.
  • What do users say about using it? Make sure you read both positive and negative comments. Test drive it before you trust it.
  • How many people do you know who use it? Could you persuade your family and friends to use it?
  • How much does it cost?
  • What happens to the message if the receiver is not using the same program as the sender?
    • Does it notify you first and offer other message delivery options or does the message encryption fail?
    • For those cases where the encryption fails, does the message not get sent or is it sent and stored unencrypted on the other end?
  • Will it work on other platforms besides yours? Android, iOS, Blackberry, Windows, etc.
  • Does the app include an anonymizer, such as Tor?
  • While the app itself may be free, consider whether the messages will be sent using data or SMS. Will it cost you money from that standpoint?

The Electronic Frontier Foundation recently published an article called “The Secure Messaging Scorecard” that might help you find an app that meets your needs; the protocols used by the applications it lists appear in the table below.

I picked out a few apps that met all of their parameters, and put together some notes on cost, protocols, and platforms. While I have not used any of them, I am looking forward to testing them, and will let you know how it goes.


| App Name | Cost | Platforms | Protocols | Notes |
| --- | --- | --- | --- | --- |
| ChatSecure + Orbot | Free; open source (GitHub) | iOS, Android | OTR, XMPP, Tor, SQLCipher | |
| CryptoCat | Free; open source (GitHub) | Firefox, Chrome, Safari, Opera, OS X, iPhone; Facebook Messenger | OTR (single conversations); XMPP (group conversations) | Group chat, file sharing; not anonymous |
| Off-The-Record Messaging for Windows (Pidgin) | Free | Windows, GNOME 2, KDE 3, KDE 4 | OTR, XMPP, file transfer protocols | |
| Off-The-Record Messaging for Mac (Adium) | Free | Adium 1.5 or later on Mac OS X 10.6.8 or newer | OTR, XMPP, file transfer protocols | No recent code audit |
| Signal (iPhone) / RedPhone (Android) | Free | iPhone, Android, and the browser | ZRTP | |
| Silent Phone / Silent Text | | Desktop: Windows | ZRTP, SCIMP | Used for calling, texting, video chatting, or sending files |
| Telegram (secret chats) | Free | Android, iPhone/iPad, Windows Phone, web version, OS X (10.7 and up), Windows/Mac/Linux | MTProto | Cloud-based; runs a cracking contest periodically |
| TextSecure | Free | Android | Curve25519, AES-256, HMAC-SHA256 | |


When you send a message, who controls your messages? You write them and you receive them, but what happens in the middle? Where are they stored? Who can read them? Email, texts, instant messaging and Internet relay chat (IRC), videos, photos, and (of course) phone calls all require software. Those programs are loaded on your phone or your tablet by the device manufacturer and the service provider. However, you can choose to use other, more secure, programs.

In the old days of the 20th century, a landline telephone call (or a fax) was an example of point-to-point service. Except for wiretaps or party lines, or situations where you might be overheard or the fax intercepted, that type of messaging was reasonably secure. Today, messaging does not usually go from your device—whether it is a cell phone, laptop, computer, or tablet—directly to the receiver’s device. Landlines are becoming scarcer, as digital phones using Voice over IP (VoIP) are becoming more prevalent. Messages are just like any other Internet activities: something (or someone) is in the middle.

It’s a lot like the days when an operator was necessary to connect your call. You are never really sure if someone is listening to your message.

What that means is that a digital message is not secure without extra precautions. It may go directly from your device to your provider’s network, or it may be forwarded from another network; it often depends on where you are located in relation to a cell phone tower and how busy that tower is. Once the message has reached your provider’s network, it may bounce to a couple of locations on their network, and then—depending on whether your friend is a subscriber of the same provider—the message may stay on the same network or it may hop to another provider’s network, where it will be stored on their servers, and then finally be delivered to the recipient.

Understand that data has different states, and how it is treated may differ depending on the state. Data can be encrypted when it is transmitted and it can be encrypted when it is stored, or it can remain unencrypted in either state.

Everywhere it stops on the path from your device to the destination, the message is stored. The length of time it is kept in storage depends on the provider’s procedures, and it could be kept for weeks or even years. It gets backed up, and it may be sent to offsite storage. At any point along its travels, it can be lost, stolen, intercepted, or subpoenaed. If the message itself is encrypted, it cannot be read without access to the key. If the application is your provider’s, they may be able to read the message even though it is encrypted, because they may have access to the key.

Is the message sent over an encrypted channel or is it sent in plain text? If you are sending pictures of LOLZ cats, who cares? But if you are discussing, say, a work-related topic, or a medical or any other confidential issue, you might not want your messages available on the open air. In fact, it’s better for you and your employer if you keep your work and personal information separated on your devices. This can happen by carrying a device strictly for work or maybe through a Mobile Device Management application your employer installed that is a container for your employer’s information. If you do not keep your information separate and your job suddenly comes to an end, they may have the right to wipe your personal device or you may not be able to retrieve any personal information stored on a work phone. Those policies you barely glanced at before you signed them when you started working at XYZ Corporation? It is a good idea to review them at least once a year and have a contingency plan! I have heard horror stories about baby pictures and novels that were lost forever after a job change.

Are you paranoid yet? If not, I have not explained this very well!

A messaging app that uses encryption can protect your communications with the following disclaimers. These apps cannot protect you against a key logger or malware designed to intercept your communications. They cannot protect you if someone has physical or root access to your phone. That is one of the reasons that jail-breaking your phone is such a bad idea—you are breaking your phone’s built-in security protections.

An app also cannot protect you against leaks by someone you trusted with your information. Remember: If you do not want the files or the texts you send to be leaked by someone else, do not send the information.

If you decide that you want to try one or more messaging applications, it is really important to read the documentation thoroughly so you understand what the app does and what it does not do and how to use it correctly. And, finally: Do not forget your passphrase!! Using a password manager such as KeePass or LastPass is a necessity today. Also back up your passwords regularly and put a copy—digital and/or paper—of any passwords you cannot afford to lose in a safe deposit box or cloud storage. If you decide to use cloud storage, make sure you encrypt the file before you upload it. Cloud storage is a term that means you are storing your stuff on someone else’s computer.

Part 2