Rant

Why are we bashing 2FA?

I’m a huge proponent of Two-Factor Authentication (2FA). Is it a perfect system? Absolutely not, but what system is? There are more secure and less secure second-factor methods; every method has its own pros and cons, and many legacy login systems aren’t compatible with 2FA to begin with. But do you know one thing that 2FA is not? It is not less secure than having no second factor at all.

Passwords were never going to be the ‘forever solution’ to security; they’re too vulnerable in too many ways. In fact, 2FA is almost certainly not going to be a forever solution either – it still leaves too much room for fraud and human error in the authentication process. But the fact of the matter is that layering on a second factor extends the useful life of password protection, mitigating (in many cases) the risk of weak passwords – not to mention the constant breaches of systems around the world that leak and compromise millions of passwords at a time.

That’s why I continue to be appalled at how many people seem to criticize 2FA as a methodology. It’s almost like they see one flaw in the system and then bam! The whole system is worthless. Last month I read an article from The Register which seemed to take that exact stance.

I understand that security is a trade-off between user convenience and information protection. It’s no different than physical security – you think anyone enjoys dealing with TSA on their way to catch a flight? But there does need to be an understanding amongst us. An understanding that security is there for a reason; an understanding that cyber criminals will steal people’s credentials for no reason and with no bias. This is why security is important and why it will, by definition, cause inconvenience. You could say then that the security itself is not to blame for the inconvenience, but rather the douche bags that cause the need for the security in the first place. If we compare this to airport security, the attempted shoe bomber is the reason that we have to take our shoes off now… thanks a lot Richard Reid. Ok, rant over, let’s get back to 2FA.

In the article from The Register, author Alistair Dabbs makes an interesting – maybe he thinks it is profound – point that his cat does not need 2FA to access the cat door: “the only reason it works brilliantly for my cat is that the other cats in my neighbourhood don’t have any programming skills” (para. 14). Ok, well there is that, but there’s also the fact that cats are (arguably) not malicious by nature. They *usually* don’t break into your house to steal the sticky note with your bank account login written on it. They *usually* don’t ransack your place looking for your social security number. I mean maybe it happens. Maybe?

Humor aside, cats don’t need 2FA because cats don’t exhibit the malicious behaviors that humans do. Cats don’t phish each other’s inboxes trying to steal login credentials. Humans, on the other hand, do have to live in that kind of world, where there are seemingly more people than not who will try to screw you out of your Facebook login or the digits on your credit card just to make a quick buck. Take a quick glance at some of the breaches listed in Have I Been Pwned (HIBP) and the magnitude of this is astounding… and those are just the known breaches. Even worse, the people who do this are frighteningly good at their trade. Phishing techniques are getting increasingly believable, and it seems impossible to have 100% of your user base adequately trained on these threats. This is why we need 2FA. Is it inconvenient? Absolutely. Is it necessary? Absolutely.

One struggle today is that we’re accessing many platforms that provide no 2FA support at all; or, commonly, platforms that provide 2FA only through inconvenient or insecure means (TOTP via an app, and SMS, respectively). The lack of broad acceptance or mass implementation of 2FA creates problems because simple usernames and passwords are clearly a broken form of authentication. My hope is that we’ll continue to make strides towards a password-less future, but that time is a long way off. Until then we need to implore developers to add some form of multifactor auth to their applications.
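For the curious, the ‘TOTP via app’ method mentioned above is simple enough to sketch: a shared secret, HMAC-SHA1, and a 30-second time counter, per RFC 4226 (HOTP) and RFC 6238 (TOTP). A minimal, illustrative Python version – not production code:

```python
import hashlib
import hmac
import struct
import time

def hotp(secret: bytes, counter: int, digits: int = 6) -> str:
    # HMAC-SHA1 over the 8-byte big-endian counter (RFC 4226)
    mac = hmac.new(secret, struct.pack(">Q", counter), hashlib.sha1).digest()
    # Dynamic truncation: the low 4 bits of the last byte pick an offset
    offset = mac[-1] & 0x0F
    code = struct.unpack(">I", mac[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(code % 10 ** digits).zfill(digits)

def totp(secret: bytes, step: int = 30, digits: int = 6) -> str:
    # TOTP is just HOTP with a time-derived counter (RFC 6238)
    return hotp(secret, int(time.time()) // step, digits)
```

The server and the app each compute the same six digits from the shared secret, so nothing secret crosses the wire at login time – which is exactly what makes it a better second factor than SMS.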

Like I said earlier, 2FA is not going to be a ‘forever solution’ to security. I don’t know if there will ever be one as criminals will always work to break the system. But 2FA is, if nothing else, an improvement to passwords by themselves – we should appreciate it for that while also being mindful of its limits as we strive towards a forever solution.

Microsoft, sometimes you annoy me

Microsoft Teams celebrated its 2-year birthday in March of this year. I really like Teams overall, but Microsoft has seriously slacked in some of the areas where it continues to need support. Whether these problems should be blamed on Teams or blamed on other products, they’re all still Microsoft.

Connectors for Flow are still in Preview?

One of Teams’ big selling points is integration with other apps. Above all else, I’d expect Microsoft’s own internal integrations to be fantastic and work flawlessly. Sadly, Microsoft has let me down in this regard. 2 years after general availability, the 11 actions that Flow can support when connecting to Teams are still listed as being in ‘Preview.’ Further, the capabilities that they give us are half-baked. I can’t use Flow to @mention a user unless it comes from the Flow bot? What?!

Hidden Annoyances – They Don’t Piss You Off Until You Find Them

For example: Cards are a great concept in Office 365, but they’re a real PITA to get working correctly. If you’ve ever tried to send more than the most basic Card to Teams you probably know what I mean. For instance, sending a card to Teams via Flow requires us to send a generic HTTP POST (see previous section – don’t get me started on why there isn’t native integration for this). This works, but only if you use exactly the right template; anything else just fails. MessageCardPlayground helps but still leaves a lot to be desired. I haven’t tried AMDesigner yet, but maybe it will help fill some of those Card-shaped holes in my heart?
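To illustrate the “exact right template” problem: the legacy MessageCard payload that Teams webhooks accept is a rigid JSON shape. A sketch of the minimal body that HTTP POST has to carry (trimmed to the basic fields; real cards add sections and actions on top of this):

```python
import json

def message_card(title: str, text: str) -> str:
    # Bare-minimum legacy MessageCard body for a Teams incoming webhook;
    # stray off-template and the post simply fails, as described above.
    card = {
        "@type": "MessageCard",
        "@context": "https://schema.org/extensions",
        "summary": title,  # Teams wants a summary (or text) to render the card
        "title": title,
        "text": text,
    }
    return json.dumps(card)
```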

Still on the subject of Cards, don’t go thinking that you can easily POST back from Teams into Flow! No, that’d be too easy. For some reason Microsoft, in its ultimate wisdom, decided that when you click an action button in Teams, the POST it sends back includes a JSON Web Token (JWT) that completely breaks Flow’s ability to receive the message (it sounds like what technically happens is that Flow sees the JWT and then disregards the additional bearer token, which is what it actually needs). Stack Overflow has a thread about this where it sounds like Microsoft is aware of the issue but has no real sense of urgency to make its products work well with each other. I’ve been forced to come up with an intermediary – a proxy of sorts – that my Card buttons can target; it strips the JWT and then forwards the request to Flow so everything actually works properly.
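The heart of that proxy is tiny: accept the POST from Teams, drop the Authorization header carrying the JWT, and relay the body on to the Flow HTTP trigger URL. A sketch of the header-scrubbing step (the forwarding plumbing around it is whatever web framework you have handy):

```python
def strip_jwt(headers: dict) -> dict:
    # Remove the Authorization header (the JWT that Teams attaches to the
    # card-action POST) before relaying the request on to Flow's HTTP
    # trigger; everything else passes through untouched.
    return {k: v for k, v in headers.items() if k.lower() != "authorization"}
```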

Lack of Administrative Capabilities

Teams does offer more administrative capability than Skype ever did – but it still isn’t enough. Where is our ability to restrict posting permissions on individual channels within Teams? Sure we can do that for the General channel, but that isn’t enough. Where is our ability to have new Teams members auto-follow (not just auto-favorite) certain channels? Microsoft needs to remember that Teams is being used in organizations where people are not always (actually, are seldom) technically inclined – they don’t care to spend their time going into each channel and clicking “Follow this channel,” even if it is something that would help them. PS – anyone reading this who wants to help get this changed, get on UserVoice and help bump it up!

Creatively move Mac OS X 10.6.8 Wiki to SharePoint Online

Isn’t it frustrating when it’s nearly impossible to find information on moving data to new systems? This is one reason I shy away from recommending data systems that are not business-class… such as the Wiki server feature found in OS X Snow Leopard. I wanted to share this post due to my frustration of not being able to find an easy migration path away from this Wiki software and onto something more business-friendly. Hopefully it can help someone in a similar situation.

I’ve been working a lot lately in Microsoft Flow – an automation tool that works great for workflows and connecting different software packages to make them work together. Because Flow is designed for repetitive tasks, it seemed like it could be a candidate for this function even though you wouldn’t normally think of it for this purpose.

Here’s a little background on the native state of the Wiki, and what we’re dealing with:

  • The raw files are stored on the Mac server in  /Applications/Server.app/Contents/ServerRoot/usr/share/collabd
  • Each subfolder.page is basically an individual post.
  • The files in each of these folders that you’ll want to work with are the .plist files. You’d think that you’d want the .html files, but Apple doesn’t make it that easy for you. Looking at the .plist in a text editor we see that the .plist is where almost all of the post detail is stored in an HTML-like format.
  • If the post included attachments such as files or images, they should also reside in this folder.

Let me start off with a couple of caveats before I go into the technical detail of how this will work. First, this is a basic migration solution. There is no native compatibility between the OS X Wiki and, well, pretty much anything else. This process at least allows us to grab some of the more useful information from the Wiki and take it to a system which will hopefully be a more future-proofed home; in this case, SharePoint Online. But because of this, we’re going to lose a lot of the features that were used on the Mac Wiki: user comments, inline pictures, and file attachments are some of the things that will not carry over (or at least not nicely). If the Wiki was used as a knowledge base of sorts, which mine was, then this may be OK. The frequently-accessed KB articles can easily be mended by hand and the others can be handled by attrition.

OK, let’s get to it. We’re going to start with a blank Flow. Right off the bat you’re going to have some choices to make, the first of which is how to get Flow access to the .plist files to begin with. I chose to accomplish this with a OneDrive folder. This worked really well for a couple of reasons:

  • Flow can access OneDrive for Business with a native connector, and you can trigger your Flow when it sees new files in the specified OneDrive folder. This is just straight convenience.
  • If you have Flow you should have OneDrive, so why not?

Whatever you choose, one more catch with Flow that hung me up for a second was the file format: it refused to read data from a .plist file. So before you bring them into Flow, change the file extension to something less Apple-ish such as .txt or .html.
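If you’re staging the files from a local copy first, that renaming is a quick loop. A sketch (the folder name is whatever you copied the .page folders into):

```python
from pathlib import Path

def rename_plists(folder: str) -> list:
    """Give every .plist under `folder` a .txt extension so Flow will read it."""
    renamed = []
    for f in Path(folder).rglob("*.plist"):
        target = f.with_suffix(".txt")
        f.rename(target)
        renamed.append(target)
    return renamed
```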

Once you have your file import method sorted out, we need to start pulling the data out of the .plists. I first identified the data fields that were most important to me and needed to be pulled over into our new Wiki. For me, these were:

  1. Title
  2. Author
  3. Created Date
  4. Body
  5. UID (I’ll explain this one more later)

It’s important to understand the layout of the .plist files. Within the .plist, each of the fields we want is encapsulated in <key></key> tags. The content that corresponds to that field immediately follows it within <string></string> tags. Within Flow we can use the key tags to determine where in the file to pull content from.
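For anyone who wants to prototype the extraction outside of Flow first, that key-then-string pattern is easy to express as a regular expression. A Python sketch of the same logic:

```python
import re

def plist_field(text: str, key: str) -> str:
    # Each field sits as <key>name</key> immediately followed by
    # <string>content</string>; grab whatever is inside the string tags.
    match = re.search(
        rf"<key>{re.escape(key)}</key>\s*<string>(.*?)</string>", text, re.S
    )
    return match.group(1) if match else ""
```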

In Flow, I started with initializing a string variable for each chunk of content that I wanted to end up with:

For the Title, I used substring() to extract the data between certain points starting at the phrase “>title<” as shown below:

substring(triggerBody(),add(lastIndexOf(triggerBody(),'>title<'),22),sub(sub(lastIndexOf(triggerBody(),'>tombstoned<'),15),add(lastIndexOf(triggerBody(),'>title<'),22)))

Basically all we’re doing here is calculating the position of the start of the data (we know where it starts because it begins with <key>title</key>) and where it ends (it is immediately followed by the next content tag, <key>tombstoned</key>). Notice in my expression above that I chose to use >title< and >tombstoned< rather than <key>title</key> and <key>tombstoned</key>. This is because I found that the substring() function in Flow did not seem to like the expression when it was built with the full tags; something about the extra special characters made it throw an error basically stating that there was zero-length content.

Luckily for us the date is in a format that is easy for Flow to consume. Since it does come over in UTC though, after we grab the content using substring() we send it to the ‘Convert Time Zone’ action to get it into our local time zone:
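Outside of Flow, the equivalent of that ‘Convert Time Zone’ step looks like the sketch below, assuming the timestamps come over as ISO 8601 UTC strings (the exact stamp format and target zone in your export may differ):

```python
from datetime import datetime
from zoneinfo import ZoneInfo

def utc_to_local(stamp: str, tz: str = "America/New_York") -> str:
    # Parse an ISO 8601 UTC timestamp and shift it into the target zone,
    # mirroring what Flow's 'Convert Time Zone' action does for us.
    dt = datetime.fromisoformat(stamp.replace("Z", "+00:00"))
    return dt.astimezone(ZoneInfo(tz)).isoformat()
```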

The actual body of the Wiki post posed a new issue for us; look closely and you’ll see that it is typical HTML, but uses character entities for the greater than/less than symbols: &lt; for < and &gt; for >. If you want to leave them you can, however for my purpose I took them out using a couple of quick expressions using the replace() function:

replace(variables('varHTMLBody'),'&lt;','<')
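Outside of Flow, Python’s standard library does the same entity decoding in one call, which is handy for sanity-checking a page or two before the full run:

```python
import html

escaped = "&lt;p&gt;Server room door code changed &amp; posted&lt;/p&gt;"
body = html.unescape(escaped)  # decodes &lt; &gt; &amp; back to < > &
```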

Now that we have our data, it’s time to start inserting it into HTML that we can usefully take wherever we want. I accomplished this by using a number of Compose actions that brought the content into the correct places between HTML tags:

… and further brought individual pieces together with concat(variables('strHeaderHTML1'),variables('strHTMLBeforeTitle'))

Take note in the image above that I included the original page’s metadata as part of the HTML header. This allows us to very quickly match one of our new Wiki pages with the old folder from OS X, giving us the ability to go back and grab attachments, images, or the raw files again if needed.
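Stitched together in Python instead of Compose actions, the assembly step looks roughly like this (the meta tag names are my own convention for carrying the old metadata, not anything SharePoint requires):

```python
def build_page(title: str, author: str, created: str, body: str, uid: str) -> str:
    # Carry the original post's UID (the .page folder name) in a meta tag
    # so each new page can be traced back to its source folder on the Mac.
    return (
        "<html><head>"
        f'<meta name="wiki-uid" content="{uid}">'
        f'<meta name="wiki-author" content="{author}">'
        f'<meta name="wiki-created" content="{created}">'
        f"<title>{title}</title>"
        "</head><body>"
        f"<h1>{title}</h1>{body}"
        "</body></html>"
    )
```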

Now that we have our HTML content all brought together, in order to build the file itself I once again turned to OneDrive to build a brand new file with the HTML content. I also saved my files as .aspx since I wanted to ultimately bring them into SharePoint:

We’re technically done with the conversion at this point; but why not use Flow to also get the converted file to its final destination? I mentioned earlier that, for me, this was SharePoint Online. Good thing we used Flow, we can do this part automagically too:

We can do a little better than this, though. I brought the files into a Wiki page in SharePoint, so I’d really like them to be as useful as possible to the staff who will be using them. Sadly I still can’t make it so the pages can be natively edited in the new format and the new Wiki, but the best I can do is give my users a table of contents so they can at least get to the pages quickly and easily. To accomplish this I used Flow to build the hyperlinks I’d need and dump them into an Excel spreadsheet I can copy from. I can build the table quickly and then just copy and paste it into the Wiki Home.aspx page so that it at least looks nice and native for our users:
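Generating those table-of-contents links is simple string work; a sketch of the idea (the site URL here is a made-up placeholder):

```python
def toc_rows(pages: list) -> list:
    # pages: list of (title, filename) pairs -> one anchor tag per row,
    # ready to paste into the Home.aspx table.
    base = "https://contoso.sharepoint.com/sites/wiki/SitePages/"  # hypothetical
    return [f'<a href="{base}{fname}">{title}</a>' for title, fname in pages]
```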

In the end, we have something that is at least usable. This is quite a hack to automate the migration of the OS X Wiki into something else, but I honestly did not have any luck finding 3rd-party software to do it for me. If you know of something, please let me know. Good luck!

Why I hate McAfee (the company) and why you should, too

Companies have a tough time fighting spam. I get it. Spam fuels the spread of viruses, phishing, identity theft, and general user confusion. I despise it as much as the next guy, but it has become a part of day-to-day life for any email user or mail-enabled organization. Because of how rampant and aggressive spam email has become, as well as the ever-increasing danger of the websites that spam may lead you to, companies that fight spam have taken up blacklisting: adding email domains and server IP addresses to one of several lists that are used by various spam filters to more easily detect spam emails. Getting on one of these blacklists can be entirely too easy, and oftentimes it is entirely too difficult to be removed once you’re on one.

At this point you’re probably thinking “good, let’s stop as many of those spammers as we can!” Well, the problem is that legitimate email-sending companies can get added to these lists. Before anyone knows what is going on, a legitimate and honest company is having problems sending (or even receiving) emails and business grinds to a halt. At this point the company’s IT resources will begin sorting out the issue and eventually begging and pleading for their server to be removed from one or more of the blacklists that are crippling their email service. Some of the blacklist providers offer a simple web-based removal process that requires just a simple explanation… but then there is McAfee.
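For context on how these lists are actually consulted: most are DNS-based blacklists (DNSBLs). A filter reverses the sending server’s IP octets, appends the list’s zone, and does a DNS lookup; any answer means “listed.” A sketch of building that query name:

```python
def dnsbl_query(ip: str, zone: str = "zen.spamhaus.org") -> str:
    # 192.0.2.1 checked against zen.spamhaus.org becomes a lookup of
    # 1.2.0.192.zen.spamhaus.org; an A-record answer means the IP is listed.
    return ".".join(reversed(ip.split("."))) + "." + zone
```

This is why a single bad listing hurts so much: every spam filter that subscribes to the zone starts rejecting your mail at once.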

McAfee has a ‘special’ group within their anti-spam division, known as McAfee Messaging Security. This group, from what I have gathered, takes recommendations from affiliate organizations of domains that should be flagged as spammers and arbitrarily adds them to their blacklist without any sort of verification or validation of an actual offense. The only way to be removed from McAfee’s blacklist? Send an email to [email protected] or [email protected] and wait for them to tell you how they picked your domain randomly out of a hat and blacklisted it for no reason.

What’s the real problem, you ask? The real problem is that this Messaging Security group is ONLY AVAILABLE BY EMAIL! No phone call can reach them, no tech support case (even with Gold Support) will be escalated to them, EMAIL ONLY. So while your business is stagnant, crippled, and waiting for McAfee to get back to you to resolve the issue, your customers are fleeing, getting bounce-backs, and wondering why they aren’t receiving prompt replies. But wait, there’s more.

McAfee Messaging Security likes to keep things as vague as possible, that way you have trouble telling that they have no real reason for blacklisting you. Their first response to your email will be “uh, well, this website here has junk html files that need to be removed before we consider removal” (you may think I’m exaggerating, and I wish I was… this is how it actually happens). So, five email exchanges later (12 hours in between each one, mind you) and hopefully you’ll have the problem fixed, or at least have an idea of what you actually need to do to satisfy these ruthless email dictators. Hopefully the affected company isn’t also a subscriber of McAfee’s cloud-based spam filter, because if it is, the email replies from Messaging Security can get caught in their own spam filter and the exchange takes even longer.

I’ll stop here with my rant. Hopefully you get the picture and take warning. McAfee produces sloppy, sub-par software and backs it with even worse service and support. McAfee is one company that I will never recommend to peers and customers for these reasons.