Professional Security Solutions

GROUND Security provides customized information security solutions for small- and medium-sized businesses. Our experienced professionals bring a wealth of knowledge and talent to every environment they work in. Let us help keep you secure in an unsafe world.

Learn More
  • Any technology, any business. We'll help keep you secure in an insecure world.

  • What can the cloud do for your business? We have the answers.

SECURITY CONSULTING

Engage with one of our consultants for a review of your computing environment. We'll recommend and help you implement solutions that help ensure security, safety, and regulatory compliance.

RETAIL AND RESTAURANT

Seamless solutions for retail branches, restaurants, and other small businesses. Offload technology and PCI compliance to our experts, who will be with you every step of the way.

BUSINESS AUTOMATION

Add intelligence to your processes by combining cloud-based tools and logic with your data. Work faster and better by processing your company's data into actionable information.

SECURE NETWORKING

Our team of experienced professionals can help design, install, and maintain secure network and server solutions to meet your business's needs. Have a problem with an existing system? We can provide timely support with quick resolution times.

Why are we bashing 2FA?

I'm a huge proponent of Two-Factor Authentication (2FA). Is it a perfect system? Absolutely not, but what system is? Some second-factor methods are more secure than others; every method has its own pros and cons, and many legacy login systems aren't compatible with 2FA to begin with. But do you know one thing that 2FA is not? It is not less secure than not having it at all.

Passwords were never going to be the ‘forever solution’ to security; they’re too vulnerable in too many ways. In fact, 2FA is almost certainly not going to be a forever solution either – it still leaves too much room for fraud or human error in the authentication process. But the fact of the matter is that layering on the second factor extends the useful life of password protection in today’s digital age, mitigating (in many cases) the risk of using weak passwords; not to mention the problems caused by the constant breaches of systems around the world leading to the leaking and compromising of millions of passwords at a time.

That’s why I continue to be appalled at how many people seem to criticize 2FA as a methodology. It’s almost like they see one flaw in the system and then bam! The whole system is worthless. Last month I read an article from The Register which seemed to take that exact stance.

I understand that security is a trade-off between user convenience and information protection. It's no different than physical security – you think anyone enjoys dealing with TSA on their way to catch a flight? But there does need to be an understanding amongst us. An understanding that security is there for a reason; an understanding that cyber criminals will steal people's credentials for no reason and with no bias. This is why security is important and why it will, by definition, cause inconvenience. You could say then that the security itself is not to blame for the inconvenience, but rather the douche bags that cause the need for the security in the first place. If we compare this to airport security, the attempted shoe bomber is the reason that we have to take our shoes off now… thanks a lot Richard Reid. Ok, rant over, let's get back to 2FA.

In the article from The Register, author Alistair Dabbs makes an interesting – maybe he thinks it is profound – point that his cat does not need 2FA to access the cat door: “the only reason it works brilliantly for my cat is that the other cats in my neighbourhood don’t have any programming skills” (para. 14). Ok, well there is that, but there’s also the fact that cats are (arguably) not malicious by nature. They *usually* don’t break into your house to steal the sticky note with your bank account login written on it. They *usually* don’t ransack your place looking for your social security number. I mean maybe it happens. Maybe?

Humor aside, cats don't need 2FA because cats don't exhibit the malicious behaviors that humans do. Cats don't phish each other's inboxes trying to steal login credentials. Humans, on the other hand, do have to live in this type of world, where there are seemingly more people than not who will try to screw you out of your Facebook login or the digits on your credit card just to make a quick buck. Take a quick glance at some of the breaches listed in HIBP and the magnitude of this is astounding… and these are just the known breaches. Even worse, the people that do this are frighteningly good at their trade. Phishing techniques are getting increasingly believable, and it seems impossible to have 100% of your user base adequately trained on these types of threats. This is why we need 2FA. Is it inconvenient? Absolutely. Is it necessary? Absolutely.
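
Speaking of HIBP – their free Pwned Passwords endpoint is an easy way to see just how exposed a given password already is, and it never sends the full password over the wire (you submit only the first five characters of its SHA-1 hash). A minimal sketch in Python, assuming the requests package; the password is obviously just an example:

import hashlib
import requests

password = "hunter2"  # example only
sha1 = hashlib.sha1(password.encode()).hexdigest().upper()
prefix, suffix = sha1[:5], sha1[5:]

# HIBP returns every hash suffix matching our 5-character prefix, with counts.
resp = requests.get(f"https://api.pwnedpasswords.com/range/{prefix}")
resp.raise_for_status()

count = 0
for line in resp.text.splitlines():
    candidate, _, seen = line.partition(":")
    if candidate == suffix:
        count = int(seen)
        break

print(f"Seen in {count} breaches" if count else "Not found")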

One struggle today is that we're accessing many platforms that provide no 2FA support at all – or, commonly, platforms that provide 2FA only through inconvenient or insecure means (TOTP apps and SMS codes, respectively). The lack of broad acceptance or mass implementation of 2FA creates problems, because simple usernames and passwords are clearly a broken form of authentication. My hope is that we'll continue to make strides towards a password-less future, but that time is a long way off. Until then we need to implore developers to add some form of multifactor auth to their applications.
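
As an aside, the TOTP codes those apps generate aren't magic – each one is just an HMAC over the current 30-second time window, per RFC 6238. A rough standard-library sketch in Python; the base32 secret is a made-up demo value:

import base64
import hashlib
import hmac
import struct
import time

def totp(secret_b32: str, digits: int = 6, period: int = 30) -> str:
    # RFC 6238: HMAC-SHA1 over the current time step, dynamically truncated.
    key = base64.b32decode(secret_b32, casefold=True)
    counter = struct.pack(">Q", int(time.time()) // period)
    digest = hmac.new(key, counter, hashlib.sha1).digest()
    offset = digest[-1] & 0x0F
    code = struct.unpack(">I", digest[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(code % 10 ** digits).zfill(digits)

print(totp("JBSWY3DPEHPK3PXP"))  # made-up demo secret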

Like I said earlier, 2FA is not going to be a 'forever solution' to security. I don't know if there will ever be one, as criminals will always work to break the system. But 2FA is, if nothing else, an improvement over passwords by themselves – we should appreciate it for that while also being mindful of its limits as we strive towards a forever solution.

A Third Florida Town Succumbs to Ransomware

I'm betting that there will be some openings for IT positions in various Florida municipalities soon. Ars Technica reported today that a third (that's right – third) city in Florida fell victim to ransomware. According to Ars, Key Biscayne, FL was breached using the Ryuk strain of ransomware (the same as Lake City, FL on June 10th, which cost that city just shy of half a million dollars in bitcoin, and possibly the same as Riviera Beach, FL, which cost that city $600k).

Key Biscayne became infected with Ryuk through what is known as a triple-threat attack: Emotet, in this instance, was brought into the network by a successful phishing email. Emotet was used as a dropper to bring in the TrickBot trojan, which allowed the attackers lateral movement throughout the city's infrastructure. At that point, the attackers had enough control to infect the city's systems with Ryuk – and game over. The city held a meeting tonight in which, one would assume, they'd decide whether or not to pay the ransom. That decision has not yet been made known.

Let’s think about the different ways that this attack should have been – but was not – stopped in its tracks before it had a chance to wreak this havoc:

  • Employee Security Awareness training. Training employees to avoid clicking on phishing links will help, although it is still subject to human error and ignorance. I recommend conducting phishing security tests and following up with the necessary training – KnowBe4's training platform is an awesome tool for this purpose.
  • Key Biscayne's MX record points to keybiscayne-fl-gov.mail.protection.outlook.com, meaning that they're using email services through Office 365's Exchange Online platform – and likely relying solely on Microsoft's spam filtering (you can check a domain's MX records yourself; see the sketch after this list). Microsoft's filtering is not ineffective, but plenty of phishing emails will inevitably get through. Microsoft provides more advanced phishing prevention when users are assigned Advanced Threat Protection (ATP) licenses – it's unclear whether the city had subscribed to this additional license.
  • Lateral movement was allowed from the infected user’s workstation to the back-end server infrastructure. Key Biscayne is a small city with a population of only around 3,000. What likely happened here is that the city did not invest enough in their IT infrastructure. Most network admins these days recognize the importance of minimizing the possibility for lateral movement, but in an organization this small I imagine that it was not recognized as an important security control.
  • Why aren’t we restoring from backups? I’m making the assumption that they can’t, and that they’ll eventually be paying the ransom. Will this lesson ever be learned by our IT organizations? Back up your data, air-gap it so that the backups can’t be compromised, and have a disaster recovery plan in place that details how you’ll restore systems in a worst-case scenario such as this.
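
As promised above, checking a domain's MX records yourself takes only a few lines – a minimal sketch in Python assuming the dnspython package (pip install dnspython):

import dns.resolver

# Look up MX records for the city's domain; an exchange ending in
# mail.protection.outlook.com indicates Exchange Online.
for record in sorted(dns.resolver.resolve("keybiscayne-fl.gov", "MX"),
                     key=lambda r: r.preference):
    print(record.preference, record.exchange)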

More details are sure to emerge regarding Key Biscayne’s ransom payment decision over the next few days. If anyone from the city happens to read this, please let us know if we can be of assistance in the recovery of your systems.

Actually, I’d say that Teams is Killing Slack

A recent article from Medium explained how Slack's amazingly successful IPO prompted Microsoft to ban Slack internally in a seemingly jealous rampage. The article's author, Michael Spencer, cited Friday's report from GeekWire, which revealed Microsoft's internal Slack ban (at least of Slack versions other than Enterprise Grid) as well as its discouragement of other competing platforms such as Google Docs and AWS.

But I think the big picture is being missed here – I believe that Microsoft's tactic was to drive home the point that Teams is superior to Slack in terms of security. Security might be the single biggest advantage of Teams that Microsoft continues to dangle over Slack, and the internal ban is just another way to draw attention to it. The timing was well-planned on Microsoft's part; everyone is talking about the successful IPO, but Microsoft wants the conversation to swing back towards Slack's deficiencies.

Further, the fact that they specifically call out Slack's free and affordable tiers (Free, Standard, Plus) while the expensive Enterprise Grid is exempt from the ban? That's another good dig at Slack – Microsoft wants people to remember that affordable Slack = insecure Slack. Microsoft maintains the pricing advantage, in many cases, due to the number of organizations already subscribed to Office 365's plethora of other cloud services. Teams is a no-brainer for those organizations, which get Teams either completely or at least partially included in their existing subscriptions. There's also the free version, which Microsoft introduced to compete directly with Slack's free version.

So let's not get too caught up in the idea that Microsoft is a jealous little kid throwing a temper tantrum in front of all of its employees. I think they're better than that – and frankly, why would they be jealous when Teams seems to be the product coming out on top anyway? Spiceworks conducted a study in December showing that more organizations reported using Teams than Slack. That same study showed that, while many companies are still using Skype for Business, many more will be switching to Teams very soon.

Google Docs and Amazon's AWS are also discouraged from use, according to Microsoft's internal list. This is to be expected – why would Microsoft want to incur the costs of services it already provides competing solutions for? What should be noted here is that their use is discouraged, not prohibited. Microsoft is allowing these competing services to be used when needed – perhaps for keeping up on their new features – while avoiding the unnecessary costs of using them when Office or Azure will do the trick. But with Slack (and Grammarly, too) it's a complete ban on the lower tiers, not just a discouragement. Sorry, Slack – Microsoft says you're not secure enough to play with the big boys.

Desjardins Group: Another Slap on the Wrist from Lawmakers

This is getting ridiculous: companies continue to lose people's personal data and no one seems to care to do anything about it. Where'd it happen this time? The Desjardins Group credit union co-op, a financial institution that you'd expect would have some of the tightest controls to prevent this kind of breach.

According to CBC/Radio-Canada, this breach was not caused by the nefarious hacks that seem so frequent today, but rather by an insider – an employee who accessed the data of 2.9 million Caisse Desjardins members and decided to share the information outside of the company. Details are sparse, but supposedly the compromised information included “names, addresses, birth dates, social insurance numbers, email addresses and information about transaction habits,” but not passwords, security questions, or PINs. Well, there's some silver lining: they only stole your personal information, not the information that would let money be withdrawn from the credit union's coffers. Doesn't that make you feel better?

Let's think for a moment about how this type of data breach would have been possible. First, the malicious insider had a motive. Maybe the data is being sold? We're not yet told who this data was shared with (or sold to), but if I were one of these members I'd be very concerned about identity theft right now. From an information security perspective, the insider would have needed access to the databases containing this data – or maybe they improperly accessed a backup copy of the data, stored without proper security controls. Then they would have been able to obtain the data, and share it, without too many red flags being thrown up. Details are sparse, but if the data was shared externally over the network, then it apparently was not caught by any Data Loss Prevention (DLP) systems; if it was carried out on media such as a flash drive, then there doesn't appear to have been much control around that, either. Thinking this through raises so many questions that need to be asked:

  1. Was the employee’s role one that would have given them access to this data? Or did they find a way around access controls?
  2. How did the act of downloading the data of 2.9 million users not throw up more red flags than it did? How was it allowed in the first place? Database queries like this should have been setting off all kinds of alarms in any decent SIEM or IDS.
  3. Why did DLP or physical media controls not prevent the exfiltration of the data?
  4. Why was the data stored in an unencrypted format? The fact that some data was lost while other data was not suggests that the Desjardins Group cares more about passwords, security questions, and PINs than it does about the personal information of its 2.9 million members.

Despite the severity of this breach (and the apparent lack of security controls at a financial institution!) and other breaches like it, doesn't it seem like lawmakers are not doing enough? CBC/Radio-Canada's article states that “Quebec’s regulator of financial institutions, the Autorités des marchés financiers (AMF), described the situation as ‘very serious’ but said it is ‘satisfied with the actions’ taken so far by Desjardins Group” (para. 9). Sounds to me like Desjardins Group will just be getting a slap on the wrist. Maybe a small fine? In my opinion, organizations that fail to adequately protect consumer data should be fined in a massive way – one that sets an example for other companies – and the affected consumers should also receive direct financial compensation, not just the B.S. publicity stunt of a year of free credit monitoring. Money is what these companies know, and that's what will make them start caring.

Microsoft, sometimes you annoy me

Microsoft Teams celebrated its 2-year birthday in March of this year. I really like Teams overall, but Microsoft has seriously slacked in some of the areas where it continues to need support. Whether these problems should be blamed on Teams or on other products, they're all still Microsoft.

Connectors for Flow are still in Preview?

One of Teams' big selling points is integration with other apps. Above all else, I'd expect Microsoft's own internal integrations to be fantastic and work flawlessly. Sadly, Microsoft has let me down in this regard. Two years after general availability, the 11 actions that Flow supports when connecting to Teams are still listed as being in 'Preview.' Further, the capabilities they give us are half-baked. I can't use Flow to @mention a user unless it comes from the Flow bot? What?!

Hidden Annoyances – They Don’t Piss You Off Until You Find Them

For example: Cards are a great concept in Office 365, but they're a real PITA to get working correctly. If you've ever tried to send more than the most basic Card to Teams, you probably know what I mean. For instance, sending a card to Teams via Flow requires us to send a generic HTTP POST (see the previous section – don't get me started on why there isn't native integration for this). This works, but only if you use exactly the right template; anything else just fails. MessageCardPlayground helps, but still leaves a lot of usefulness to be desired. I haven't tried AMDesigner yet, but maybe it will help fill some of those Card-shaped holes in my heart?
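
To illustrate the kind of POST I mean, here's a minimal sketch in Python that sends a bare-bones legacy MessageCard to a Teams incoming-webhook URL. The webhook URL is a placeholder – and note that this is about as simple as a working payload gets; stray far from a known-good template and you'll hit the failures I'm describing:

import requests

webhook_url = "https://outlook.office.com/webhook/..."  # placeholder

card = {
    "@type": "MessageCard",
    "@context": "https://schema.org/extensions",
    "summary": "Nightly export finished",
    "title": "Nightly export finished",
    "text": "The wiki export completed at <b>02:00</b>.",
}

resp = requests.post(webhook_url, json=card)
resp.raise_for_status()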

Still on the subject of Cards, don't go thinking that you can easily POST back from Teams into Flow! No, that'd be too easy. For some reason Microsoft, in its ultimate wisdom, decided that when you click an action button in Teams that sends a POST back, it will include a JSON Web Token (JWT) that completely breaks Flow's ability to receive the message (it sounds like what technically happens is that Flow sees the JWT and then disregards the additional bearer token, which is what it actually needs). Stack Overflow has a thread about this where it sounds like Microsoft is aware of the issue but has no real sense of urgency to make their products work well with each other. I've been forced to come up with an intermediary – a proxy of sorts – that my Card buttons can target, where requests are stripped of the JWT and then sent on to Flow so that it all actually works properly.
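
For anyone fighting the same battle, here's a minimal sketch of the kind of stripping proxy I mean, assuming Flask and requests; the Flow HTTP-trigger URL is a placeholder. It simply re-posts the card action's JSON body to Flow, so the Authorization header (and its JWT) never reaches Flow:

from flask import Flask, request
import requests

app = Flask(__name__)
FLOW_URL = "https://prod-00.westus.logic.azure.com/workflows/..."  # placeholder

@app.route("/cardproxy", methods=["POST"])
def cardproxy():
    # Forward only the JSON body; Teams' JWT is deliberately left behind.
    resp = requests.post(FLOW_URL, json=request.get_json(force=True))
    return ("", resp.status_code)

if __name__ == "__main__":
    app.run(port=8080)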

Lack of Administrative Capabilities

Teams does offer more administrative capability than Skype ever did – but it still isn't enough. Where is our ability to restrict posting permissions on individual channels within Teams? Sure, we can do that for the General channel, but that isn't enough. Where is our ability to have new team members auto-follow (not just auto-favorite) certain channels? Microsoft needs to remember that Teams is used by organizations where people are not always (actually, are seldom) technically inclined – they don't care to spend their time going into each channel and clicking "Follow this channel," even if it is something that would help them. PS – anyone reading this who wants to help get this changed, get on UserVoice and help bump it up!

Experience: Duo Security Multifactor Authentication with Office 365

Duo Security is an industry leader in Multi-Factor Authentication (MFA) and zero-trust security solutions. Many organizations choose to federate their on-premises identity – Active Directory – with Microsoft so that users have a Single Sign-On (SSO) experience when accessing Office 365; this is, in many cases, achieved using ADFS. Duo conveniently provides a plugin for ADFS so that MFA can be bolted onto the existing SSO solution.

In fact, I found it to be just about that easy. In my ADFS 4.0 environment, Duo's plugin installed seamlessly and was instantly available for MFA within ADFS. The great thing about this is that when your users authenticate through a web browser (such as to OWA), they can be prompted to enroll on the spot if they're not already enrolled. This makes user onboarding simple and easy.

Duo's access policies can be combined with ADFS claim rules for a very customizable experience. In my case, I chose to require 2FA only for extranet connections:

ADFS access control rules

Just to be sure, I also whitelisted my public IP address within Duo's application policy. The solution works beautifully – when a user connects to OWA from outside our network (via a web browser), they'll see Duo integrated into the ADFS login page as a next step after a successful login:

Duo's two-factor authentication (2FA) prompt in ADFS

The “Gotchas”

Now, let's talk about the caveats of this solution. They are few, but they do need to be planned for.

Modern Authentication

Non-browser connections (such as those from Outlook installed on user desktops) will now require Modern Authentication. This won't be a big deal for most organizations, but it does restrict which Outlook clients and mobile mail clients can be used. Outlook 2013 or newer is required, though Outlook 2013 needs a registry change to be compatible. For mobile clients, check out Duo's KB article for more detail.
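
If memory serves, the Outlook 2013 change is the documented EnableADAL/Version pair under the Office 15.0 identity key – verify against Microsoft's current guidance before deploying. A sketch of setting the per-user values with Python's winreg module:

import winreg

# Enable Modern Authentication (ADAL) for Outlook 2013.
path = r"Software\Microsoft\Office\15.0\Common\Identity"
with winreg.CreateKey(winreg.HKEY_CURRENT_USER, path) as key:
    winreg.SetValueEx(key, "EnableADAL", 0, winreg.REG_DWORD, 1)
    winreg.SetValueEx(key, "Version", 0, winreg.REG_DWORD, 1)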

Mail Relay

Microsoft's documentation lists three options for relaying mail through Exchange Online, the first of which is SMTP Client Submission – relaying through Exchange Online using an authenticated connection on port 587. This is commonly accomplished using the Windows built-in SMTP relay in IIS, and if this is how you're relaying, this method will stop working. A good workaround is to add a connector for the IP address that your mail relay sends from, and then reconfigure the relay to send to yourdomain-com.mail.protection.outlook.com on port 25 (direct send). For more information, see option #3 of the same Microsoft article.
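
Here's a rough sketch of what that direct-send reconfiguration amounts to, using Python's smtplib; the domain and addresses are placeholders, and your relay's public IP must be permitted (via SPF and/or the connector) for Exchange Online to accept the mail:

import smtplib
from email.message import EmailMessage

msg = EmailMessage()
msg["From"] = "scanner@yourdomain.com"   # placeholder internal sender
msg["To"] = "helpdesk@yourdomain.com"    # placeholder internal recipient
msg["Subject"] = "Relay test"
msg.set_content("Direct-send test through the tenant's MX endpoint.")

# No authenticated submission on 587 – connect straight to the MX endpoint.
with smtplib.SMTP("yourdomain-com.mail.protection.outlook.com", 25) as smtp:
    smtp.starttls()
    smtp.send_message(msg)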

Alternatively, if you have another domain set up in your 365 account that is not federated through ADFS, you can use an account in that domain to continue to relay, since logins for that account will not be subject to Duo's MFA.

PowerApps / Flow / Other 365 services

Connectors set up in PowerApps and Flow will need to re-authenticate, with MFA, based on the policies configured within Duo. This is a pain because it requires human interaction. As mentioned in the previous section, if you have a non-federated domain in your tenant, it may be easiest to create these connections with an account from that domain.

ASCO Industries Falls Victim to Ransomware

Help Net Security reported yesterday that ASCO Industries, an aerospace manufacturing company, was impacted by a ransomware infection severe enough for them to suspend their manufacturing operations around the globe.

It continues to amaze me how effective ransomware is at grinding a business's operations to a halt. Ransomware isn't new by any means, yet organizations don't seem to be taking the threat seriously. Employees remain extremely vulnerable to the phishing tactics that often let malware into the network, but IT departments should be more prepared for this sort of outbreak than they seem to be. Ransomware should be curable with a quick restore of infected systems – and then you're back online. Users' workstations? Re-image and call it a day.

The blame here does fall on the IT organization itself for being ill-prepared. I don't pretend to be knowledgeable about the ins and outs of ASCO Industries' IT environment, but today's hyper-connected world demands that IT professionals rise to the call of taking reasonable measures to protect their environments. We're not talking about anything crazy, just common protective measures such as:

  • Backups of all servers to meet RTO/RPO as determined by business needs.
  • Endpoint protection – a reputable antivirus and intrusion prevention solution. It won’t catch everything but it is still an absolute necessity.
  • A segregated network. In this specific example it seems logical that the manufacturing network should be separate and more locked-down than other client networks – so why was the production line impacted?
  • An incident response plan: if a workstation does get infected, what do we do? This doesn't have to be rocket science; it might be as simple as disconnecting the machine from the network until it is re-imaged.
  • Security awareness training. This is no longer optional – staff need to be trained on threats such as phishing, social engineering, and basic information security concepts.

The biggest problem I've seen is a lack of urgency on the IT organization's part to accomplish these bare minimums. That may also be influenced by insufficient understanding (and perhaps a lack of proper budget allocation) from the C-level executives in the organization. One thing I'm sure of is that the folks at ASCO Industries are re-evaluating those priorities right now.

Sam’s Club Data Breach?

Just a theory – but it may be possible that Sam's Club (specifically one of their mobile apps, or the data linked to it) has been compromised. I've had two reports of users who recently used the Scan & Go feature, only to find shortly afterward that their accounts had been fraudulently accessed and used.

Sam's Club recently integrated the Scan & Go functionality into their main mobile app. Coincidental timing? Please reach out if you have any information that could help track this down.

Creatively move Mac OS X 10.6.8 Wiki to SharePoint Online

Isn't it frustrating when it's nearly impossible to find information on moving data to new systems? This is one reason I shy away from recommending data systems that are not business-class… such as the Wiki server feature found in OS X Snow Leopard. I wanted to share this post out of frustration at not being able to find an easy migration path away from this Wiki software and onto something more business-friendly. Hopefully it can help someone in a similar situation.

I’ve been working a lot lately in Microsoft Flow – an automation tool that works great for workflows and connecting different software packages to make them work together. Because Flow is designed for repetitive tasks, it seemed like it could be a candidate for this function even though you wouldn’t normally think of it for this purpose.

Here’s a little background on the native state of the Wiki, and what we’re dealing with:

  • The raw files are stored on the Mac server in /Applications/Server.app/Contents/ServerRoot/usr/share/collabd
  • Each .page subfolder is basically an individual post.
  • The files in each of these folders that you'll want to work with are the .plist files. You'd think you'd want the .html files, but Apple doesn't make it that easy for you. Looking at a .plist in a text editor, we can see that it's where almost all of the post detail is stored, in an HTML-like format.
  • If the post included attachments such as files or images, they should also reside in this folder.

Let me start off with a couple of caveats before I go into the technical detail of how this works. First, this is a basic migration solution. There is no native compatibility between the OS X Wiki and, well, pretty much anything else. This process at least allows us to grab some of the more useful information from the Wiki and take it to a system that will hopefully be a more future-proofed home; in this case, SharePoint Online. Because of this, we're going to lose a lot of the features that were used on the Mac Wiki: user comments, inline pictures, and file attachments are some of the things that will not carry over (or at least not nicely). If the Wiki was used as a knowledge base of sorts, which mine was, then this may be OK. The frequently accessed KB articles can easily be mended by hand and the others can be handled by attrition.

OK, let's get to it. We're going to start with a blank Flow. Right off the bat you have some choices to make, the first of which is how to get Flow access to the .plist files in the first place. I chose to accomplish this with a OneDrive folder. This worked really well for a couple of reasons:

  • Flow can access OneDrive for Business with a native connector, and you can trigger your Flow when it sees new files in the specified OneDrive folder. This is just straight convenience.
  • If you have Flow you should have OneDrive, so why not?

Whatever you choose, one more catch with Flow that hung me up for a second was the file format: it didn't want to read the data from a .plist file. So before you bring them into Flow, change the file extension to something less Apple-ish, such as .txt or .html.
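
Something like this quick Python sketch, run against a copy of the export before uploading it to OneDrive, does the trick; the folder path is a placeholder:

import pathlib

export = pathlib.Path("/Users/admin/wiki-export")  # placeholder path
for plist in export.rglob("*.plist"):
    # Flow balks at .plist content, so give the files a friendlier extension.
    plist.rename(plist.with_suffix(".txt"))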

Once you have your file import method sorted out, we need to start pulling the data out of the .plists. I first identified the data fields that were most important to me – the ones I needed to make sure pulled over into our new Wiki. For me, these were:

  1. Title
  2. Author
  3. Created Date
  4. Body
  5. UID (I’ll explain this one more later)

It's important to understand the layout of the .plist files. Within the .plist, each of the fields we want is encapsulated in <key></key> tags. The content that corresponds to that field immediately follows it, within <string></string> tags. Within Flow we can use this to our advantage, using the particular key tags to determine where in the file to pull content from.

In Flow, I started with initializing a string variable for each chunk of content that I wanted to end up with:

For the Title, I used substring() to extract the data between certain points, starting at the phrase '>title<' as shown below:

substring(triggerBody(),add(lastIndexOf(triggerBody(),'>title<'),22),sub(sub(lastIndexOf(triggerBody(),'>tombstoned<'),15),add(lastIndexOf(triggerBody(),'>title<'),22)))

Basically, all we're doing here is calculating the position of the start of the data (we know where it starts because it begins with <key>title</key> and is immediately followed by the next content tag, <key>tombstoned</key>). Notice in my expression above that I chose to use >title< and >tombstoned< rather than the full <key>title</key> and <key>tombstoned</key> tags. This is because I found that the substring() function in Flow did not seem to like the expression when it was built with the full tags – something about the extra special characters made it throw an error stating that there was zero-length content.
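
If you ever script this outside of Flow, the same key/string extraction is a one-liner with a regular expression – a Python sketch, with a trimmed-down sample of the .plist structure:

import re

def plist_field(text: str, key: str) -> str:
    # Grab the <string> value that immediately follows <key>{key}</key>.
    m = re.search(rf"<key>{re.escape(key)}</key>\s*<string>(.*?)</string>",
                  text, re.S)
    return m.group(1) if m else ""

sample = "<key>title</key>\n<string>Printer setup</string>\n<key>tombstoned</key>"
print(plist_field(sample, "title"))  # -> Printer setup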

Luckily for us, the date is in a format that is easy for Flow to consume. Since it comes over in UTC, though, after we grab the content using substring() we send it to the 'Convert Time Zone' action to get it into our local time zone:

The actual body of the Wiki post posed a new issue: look closely and you'll see that it is typical HTML, but it uses character entities for the greater-than/less-than symbols: &lt; for < and &gt; for >. You can leave them if you want, but for my purposes I took them out with a couple of quick expressions using the replace() function:

replace(variables('varHTMLBody'),'&lt;','<')
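
If you ever handle this step outside of Flow, Python's standard library decodes these entities directly – a two-line sketch:

import html

print(html.unescape("&lt;p&gt;Hello &amp; welcome&lt;/p&gt;"))  # -> <p>Hello & welcome</p>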

Now that we have our data, it’s time to start inserting it into HTML that we can usefully take wherever we want. I accomplished this by using a number of Compose actions that brought the content into the correct places between HTML tags:

… and further brought individual pieces together with concat(variables('strHeaderHTML1'),variables('strHTMLBeforeTitle'))

Take note in the image above that I included the original page's metadata as part of the HTML header. This allows us to very quickly match one of our new Wiki pages with the old folder from OS X, giving us the ability to go back and grab attachments, images, or the raw files again if needed.

Now that we have our HTML content all brought together, to build the file itself I once again turned to OneDrive to create a brand-new file with the HTML content. I also saved my files as .aspx, since I wanted to ultimately bring them into SharePoint:

We're technically done with the conversion at this point; but why not use Flow to also get the converted file to its final destination? I mentioned earlier that, for me, this was SharePoint Online. Good thing we used Flow – we can do this part automagically too:

We can do a little better than this, though. I brought the files into a Wiki page of SharePoint, so I'd really like them to be as useful as possible to the staff who will be using these pages. Sadly, I still can't make it work to where they could be natively edited in the new format and in the new Wiki, but the best I can do is give my users a table of contents so they can at least get to the pages quickly and easily. To accomplish this I used Flow to build the hyperlinks I'd need and dump them into an Excel spreadsheet, letting me build the table quickly and then just copy and paste it into the Wiki's Home.aspx page so that it at least looks nice and native for our users:

In the end, we have something that is at least usable. This is quite a hack to automate the migration of the OS X Wiki into something else, but I honestly did not have any luck finding 3rd-party software to do it for me. If you know of something, please let me know. Good luck!

Traditional vs. Next-Gen

I had an interesting conversation several days ago with a network admin who was looking into making changes to the network at his company's main office. This office housed around 100 folks and had fairly straightforward technology needs. They had a handful of VLANs for different departments and functions.

What I liked about this setup was the fact that the VLANs were all trunked through to the pair of high-performance, high-availability firewalls at the office that were also the site’s L3 routers. In this way they were able to apply security filtering (AV/IPS/App control) to all inter-VLAN connections rather than leaving this protection at the internet border only. The network admin that I was conversing with wanted to break off this routing, though, so that all VLANs terminated at a dedicated router and the firewalls would only be used as the border gateway.

This is the traditional Cisco way of thinking, and functionally it works. It works great! I have a background in Cisco networking, so I understand this very well, and I also realize that different-sized networks have different needs – not every design works efficiently for every network. Keep in mind that I'm writing this with this small office in mind, along with the many companies I've worked with that have offices of similar size.

Unfortunately, times are changing and this separation of router and firewall is no longer the best direction for small sites. After a few quick searches you'll see that more and more threats today come from inside the network. Newer technology trends such as BYOD, IoT, web proxies, and private VPNs all contribute to this problem. Especially considering the human factor, administrators should no longer completely trust internal devices. It is too easy for a user to take their work laptop home and come back into the trusted network, where a new virus on that machine can spread unchecked. The typical IT organizations managing these smaller businesses no longer have a reason to let this happen:

  1. High-performance network devices are common and affordable; performance on the network cannot be a reason to not implement Next-Gen Firewall (NGFW) protection. Throughput on today’s hardware with NGFW features enabled can easily be greater than 1Gbps while still being very affordable, even for small businesses.
  2. Network availability is not a concern as any business-grade equipment from a reputable vendor should support HA capabilities. Insist on stacked switches for redundancy behind those firewalls? Great! Go for it. Just don’t let those switches be your internal layer 3 routers.
  3. Firewalling should be more than just blocking and allowing ports on the network. Here is the big differentiator between your common router and your NGFW: a router with an ACL is only going to block ports and IP addresses. A firewall of course has this capability, but adds user identification, antivirus, intrusion detection, application control, DDoS protection, and more. If you're telling yourself that you're fine with your Cisco 2900 router because you have ACLs between your VLANs, you're wrong. If you want to keep them, that is your choice; maybe add a transparent firewall in there too, though.

Let's take network security to the next level. Don't assume that yesterday's network design is still the best fit for today's world. And don't assume that your inside devices are trusted! Take steps to protect your network at every level. That's next-gen thinking.