In February 2017, the Australian Signals Directorate (ASD) Australian Cyber Security Centre (ACSC) published an update to their “Top 4” Strategies to Mitigate Cyber Security Incidents, revising the list to include four more crucial strategies. The “Essential Eight” has received considerable attention over the past several years, although I have encountered many organisations that are unsure where to begin. In this article, I will try to give you a bit of a kick-start to help you get going in the right direction. You are not alone… if you need help, please ask for it, since we’re all on the same side!
The original ASD/ACSC Top 4 comprised Application Whitelisting, Patching Applications, Restricting Administrative Privileges, and Patching Operating Systems. The Essential Eight now includes those four plus Disabling Untrusted Microsoft Office Macros, Application Hardening, Multi-Factor Authentication, and Daily Backups of Important Data. While the full ASD/ACSC list contains 37 strategies, your focus should be on these eight before putting too much effort into the other 29; I’ll save those for future articles.
What Are They?
Application Whitelisting: I consider a firewall to be a Yes / No device when you strip away all the “Next Generation” and Unified Threat Management (UTM) pieces. To some degree, Application Whitelisting works the same way by specifying which applications can execute (The Whitelist), leaving everything else implicitly or explicitly denied (The Blacklist). Granted, there will always be some that fall in the middle (The Greylist), but those should be reserved for administrative decision and not for the user to decide. By the way… make sure the aforementioned firewall also has a default “deny all” rule in place! I have seen many installations where the final rule was an “Allow All” with millions of hits against it.
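To make the idea concrete, here is a minimal sketch (in Python, purely for illustration) of how default-deny whitelisting works at its core: a binary either matches a known-good fingerprint, or it doesn’t run. Real solutions do far more, and the digest and name below are placeholders, not from any real product.

```python
import hashlib

# Hypothetical allowlist: SHA-256 digests of approved executables.
APPROVED_HASHES = {
    "9f86d081884c7d659a2feaa0c55ad015a3bf4f1b2b0b822cd15d6c15b0f00a08",
}

def is_allowed(executable_bytes: bytes) -> bool:
    """Return True only if the binary's digest is on the whitelist;
    everything else is implicitly denied (default-deny)."""
    digest = hashlib.sha256(executable_bytes).hexdigest()
    return digest in APPROVED_HASHES
```

The point of the default-deny posture is that anything not on the list, including brand-new malware nobody has catalogued yet, simply doesn’t run.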
Patching Applications: In a nutshell, applications are designed to perform a specific task but often don’t account for potential flaws and vulnerabilities. Unless it’s a security-centric application, security sits lower on the features list… if it makes the list at all. In some cases, applications are released with undocumented capabilities, have features enabled that aren’t being used, or use non-standard ports and services. In all fairness, if we tried to QA the apps to perfection, we’d never actually get anything to market! Over time, these capabilities, features, and other bugbears come to the surface and are fixed by the vendor or, in other cases, discovered and exploited by those that don’t share my sunny disposition.
Restricting Administrative Privileges: In nearly every environment, there are accounts that have elevated privileges beyond the everyday users to add, remove, and change elements of the information systems. These accounts, including dedicated service accounts for automated execution, wield considerable power and the ability to cause untold sorrows if used inappropriately. Some may consider only the administrator accounts used directly on servers or in Active Directory, but administrative privileges can be local, domain, or enterprise level, and have varying degrees of control (such as power users, domain administrators, and enterprise administrators, to say nothing of delegated privileges). Beyond that, they exist on workstations, network appliances, and just about every piece of IoT technology. Absolute power corrupts absolutely… or words to that effect.
Patching Operating Systems: One could probably argue that this is no different from Patching Applications, which I covered in Part 2 of this series. Yes, and no. Yes, because it is, in fact, applying updates and patches to your systems, and no, because the operating system is critical to making all the other parts operate in your environment. We seem to forget at times that our favourite applications, covered in Part 2, must run on top of other software. We install applications into (or onto, depending on how you look at it) operating systems. We also must think beyond just the ubiquitous “Windows” operating systems and consider Mac, Linux, Unix, and any of many other platforms (Novell, anyone? Don’t laugh… it’s still out there!)
There are also the operating systems that run on our favourite mobile devices powered by Apple, Android, Blackberry, Microsoft, and more. We could also consider network devices and IoT, but I think I’ve made my point. Whether virtual or physical, the operating system is the heart of the computer. Think of it like a car: you may have the baddest hot rod on the block (app) but without the engine (operating system) it’s useless. Critical maintenance updates (think of safety recalls on cars) absolutely must be applied, or bad things can happen to good people.
Like applications, operating systems don’t have the luxury to sit in QA for endless tests trying to sort out every little bug and detail that can and may go wrong when the stars align just right. So, surprise, surprise, operating systems have bugs. Some are an annoyance; some are a major security flaw. The vendors know this, and through their own means or through issues brought to them by people like you and me, they’re constantly seeking to make their product better, safer, and do more.
Unless you’ve been living under a rock, you’ve heard of WannaCry and Petya/NotPetya and how much of an impact they had globally. You’ve probably also heard that a patch addressing the underlying vulnerability was available before the major outbreak even began, yet the attacks cascaded around the world anyway. I won’t go into the logistics (nor will I play the “I told you so” card; many others have, so why pile on?) but it highlights the need for regular patching.
Disabling Untrusted Microsoft Office Macros: Macros are basically a batch of commands and processes grouped together to make life a little easier when performing routine tasks. In many cases, they simply execute as the user and save untold hours, reducing the number of errors one can make with tedious tasks. Unfortunately, macros are also a popular exploitation vector, leveraging this autonomy and ability to execute code, reaching even beyond the application itself. Anyone who has been around for a long time will remember the Melissa macro virus and the havoc it caused with email services worldwide. Or even the Wazzu macro virus that altered the content of files. Most of this is due to Visual Basic for Applications (VBA), which is still used to this day. Microsoft, to their full credit, has done a tremendous amount of work to secure macros in the past several versions of Office. Of course, you can’t save people from themselves. I once had a car with advanced safety features, but all the technology in the world wouldn’t keep me from driving off the road if I did it on purpose.
Application Hardening: Think of it kind of like spring cleaning on top of a minimalist lifestyle, where you keep only what you absolutely need after taking stock of what you have. Many applications are installed with defaults (you know the Next-Next-Next-Next-OK approach), leaving many options, services, and capabilities enabled as a result. We’re all guilty of installing applications this way, being more interested in using the program than securing it.
Default user names and passwords, insecure services, default SNMP communities, anonymous access, and the list goes on. Hardening these applications renders them more secure and less likely to be used against us. We all have applications on our infrastructures that could have a negative impact if used incorrectly or maliciously, so reducing that possibility only makes sense. Controlling who can access an application and what the application can do, and revisiting this regularly or after significant changes, is a good approach.
Multi-Factor Authentication: The short explanation is that it adds another layer of security by forcing you to provide another means of identifying yourself and in some cases, may include multiple means (it’s MULTI-factor, after all, and not just two-factor). So, what is the first factor? That’s usually your user name and password and while I have heard arguments that the user name can be one factor and the password another, I prefer to think of the two together as the first layer. Multi-factor authentication already exists in many other facets of our lives like when we apply to lease a property and must provide several pieces of identification.
Multi-Factor Authentication is not new, but it is gaining considerable momentum. Some of you remember the key fobs with a code that changed at set intervals. You entered your user name and password, and then the code displayed on the fob. It is assumed that only you have that fob and it provides a secondary way to identify whoever is logging in is who they say they are. It isn’t perfect, but it does improve security. While these fobs still exist, they appear to have been supplemented by (or replaced by) mobile apps, SMS codes, and other methods. Even smart cards are still very much in use.
On top of those methods, we’re also seeing the proliferation of biometric authentication into the consumer market through fingerprint scanners and touch-IDs on mobile devices (in all fairness, sometimes these look like a single-factor only but are usually underpinned by a user name and password during the initial setup – simply swiping your finger over the reader just reduces the other steps as the rest is “known”). We have a lot of options and given the current threat landscape, we really have no excuses to not at least consider it. If it’s available, use it. If you have a cloud-centric strategy, it’s quickly becoming a must rather than an option.
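For the curious, those rotating fob and app codes are surprisingly simple under the hood. Here is a minimal sketch of the standard TOTP algorithm (RFC 6238) using only the Python standard library; real deployments use vetted libraries and hardened secret storage rather than hand-rolled code like this.

```python
import hashlib
import hmac
import struct

def totp(secret: bytes, timestamp: float, digits: int = 6, step: int = 30) -> str:
    """Minimal RFC 6238 TOTP: HMAC-SHA1 over the current 30-second
    time step, dynamically truncated to a short numeric code."""
    counter = int(timestamp) // step          # which 30-second window we're in
    msg = struct.pack(">Q", counter)          # counter as 8 big-endian bytes
    mac = hmac.new(secret, msg, hashlib.sha1).digest()
    offset = mac[-1] & 0x0F                   # dynamic truncation (RFC 4226)
    code = struct.unpack(">I", mac[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(code % (10 ** digits)).zfill(digits)
```

Both the server and your fob or phone compute the same code from a shared secret and the current time, which is why the codes “just match” without any network traffic between the two.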
Daily Backups of Important Data: Backing up your data has been a long-standing strategy in safeguarding your information when things go sideways. Servers crash, laptops get lost, files get deleted accidentally, and mistakes are made. Mistakes, accidental or intentional, can have severe repercussions that require recovering your data such as in the event of a Ransomware attack. Whatever the reason, the fact remains you should have a backup copy of your important data.
There are many options at many different price points that will suit everyone from individuals to large enterprises. These range from magnetic and optical media, through cloud-based storage such as iCloud, OneDrive, and Box, all the way up to disaster recovery sites. The latter can range from fully functional exact replicas of production data centres with 100% live replication, through warm standby sites, to cold sites ready to be built from scratch while your data is restored. The fact remains: you have options, but you have no excuses.
Just as critical as backing up your data is the ability to restore it and use it without it being incomplete, corrupt, or completely inaccessible. Without a working restore, your data is on a one-way ticket to somewhere you can’t bring it back from.
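A simple way to catch silent corruption before you need the backup is to compare checksums of the original and the copy. This sketch (Python, illustrative only) shows the idea; enterprise backup products build this kind of verification in.

```python
import hashlib
from pathlib import Path

def checksum(path: Path) -> str:
    """SHA-256 digest of a file, read in chunks to handle large backups."""
    h = hashlib.sha256()
    with path.open("rb") as f:
        for chunk in iter(lambda: f.read(65536), b""):
            h.update(chunk)
    return h.hexdigest()

def verify_backup(source: Path, backup: Path) -> bool:
    """True only if the backup exists and matches the original byte-for-byte."""
    return backup.exists() and checksum(source) == checksum(backup)
```

Running a check like this on a sample of files after every backup window is a cheap insurance policy against discovering corruption only at restore time.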
Where Should I Start?
Application Whitelisting: The first place to start should be understanding your information systems and which applications are needed to perform your business functions. If you don’t have this list already, please create it and engage a security specialist to help if needed. This will essentially become your “Whitelist”. It’s worth noting that not every team in your organisation will use the same list… there may be a core list (such as office applications) for everyone but different lists for other roles (such as Payroll and HR). Getting a handle on which applications you need and which you don’t want is crucial; otherwise you can find yourself preventing good and allowing bad like a lousy B-grade superhero movie.
Patching Applications: As is the case with Application Whitelisting, a current inventory of applications is a must-have. We need to know what is on our network and why. Odds are the vendors of those applications have released patches and updates to address flaws, add features, and improve performance. Once we know what applications we have, we can investigate whether we have the latest stable releases and patches. In some cases, vendors are very proactive and notify their clients, supplying the patches at no charge during the lifetime of the application. Some charge extra for this service, but some just make them available without letting you know. In the end, patches and updates should be available.
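Once the inventory exists, comparing it against the latest stable releases can be as mechanical as the sketch below (Python, with made-up application names purely for illustration). One trap worth noting: versions must be compared numerically, not as plain strings, or “1.10” sorts before “1.9”.

```python
def parse_version(v: str) -> tuple:
    """'1.2.10' -> (1, 2, 10): tuples compare numerically, not lexically.
    Assumes simple dotted-integer version strings."""
    return tuple(int(part) for part in v.split("."))

def outdated(inventory: dict, latest: dict) -> list:
    """Return the applications whose installed version trails the
    latest known stable release."""
    return sorted(
        app for app, installed in inventory.items()
        if app in latest and parse_version(installed) < parse_version(latest[app])
    )
```

Feed it the installed versions from your inventory and the latest versions from vendor notifications, and the output becomes the work list for your next scheduled maintenance window.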
Restricting Administrative Privileges: As with Application Whitelisting, begin with an inventory: a current list of administrator accounts is a great place to start. It will take a while to get a thorough list of all your administrator accounts, but it needs to be done. Include accounts with elevated privileges and not just Local, Domain, and Enterprise administrator groups – consider power users and any users with delegated authority. While you’re at it, inventory your service accounts as well. Include the local administrator accounts on your workstations and whether users have this access. Finally, consider your network-capable devices such as routers, switches, firewalls, IoT, and so on. Any one of these can have many local administrator accounts. These local accounts also make it a good time to evaluate your password strategy, but more on that in a future article. If it has administrator rights, it has power, and that power must be used wisely!
Patching Operating Systems: For Microsoft aficionados, Patch Tuesday is a thing and has been for a very long time, but that doesn’t mean that patches and updates aren’t available at other times. Find out from your team how patching is handled and how patches are acquired from the vendor, tested, and deployed. If it’s a case of just checking occasionally or whenever you have time, I’d suggest making this part of your regular security maintenance. Ask the questions and get the right people involved to understand your patching and updating strategy. Ideally, you want central control and distribution, so you don’t have 500 users downloading the same patch 500 times, especially if it’s a patch that may cause issues. Understand what the patch is, what it impacts, and whether you even need it.
Disabling Untrusted Microsoft Office Macros: While it might be tempting to simply disable all macros, full stop, that isn’t the answer. Remember that macros exist for a reason and that’s to automate tasks, save time, and keep some of us from going loopy after doing the same thing a thousand times over. A better approach is to selectively trust macros but remove the choice from the end user. How do we trust macros? Digitally sign them and then lock down the application to disable all but the signed ones.
So how do I digitally sign macros?
This is where it can get complex. While there are tutorials on how to self-sign macros, self-signed certificates really don’t inspire any trust in the broader community, so a PKI, either internal using the Microsoft solution or external using a third-party trusted CA, is preferred. Rather than bog you down in details, I would encourage you to start exploring digital signing of your macros and get the right people involved before moving ahead. This is a perfect example of when you need to put your hand up and ask for some help unless you have the in-house skills. On top of digitally signing and distributing your macros, you also need to consider policies that lock down these features in the Office applications, lest your users just go in and disable this protection anyway to run all macros. Yes, scary, I know.
Of course, in an environment that doesn’t need macros, go ahead and just disable them completely. I doubt, however, that many of these environments exist.
Application Hardening: If you have undertaken an Application Whitelisting exercise or similar that required a full inventory of your applications, you have a big head-start. Otherwise, it’s time to make that list. It goes without saying that if you don’t need it, get rid of it and you’ll probably start finding applications you never knew you had. List in hand, you can check with the vendors to see what their hardening recommendations are or even use the industry best practices to better secure your environment.
Something else that should go without saying (but I’m going to say it anyway) is to change default user names (if possible) and passwords if the application uses them. You’d be surprised how often this gets overlooked. If the application uses a service that is not essential, consider disabling it or – if possible – uninstalling that component completely, which can often be done through the installation wizard (if the app uses one). Use non-default program folders to fool exploits that go looking for default installation locations. Close network ports unless required and for applications that use random ports, try to statically define these ports and adjust your firewall and security policies accordingly.
Another good tip is to engage in vulnerability scanning using any of many commercial (or even open source) tools like Nessus, Nexpose, OpenVAS, SAINT, and so on. These will often locate vulnerable services that can then be reviewed and addressed.
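At their core, those scanners start with something very simple: checking whether a TCP port accepts connections. The sketch below (Python, illustrative only; only ever scan hosts you are authorised to test) shows that basic reachability check, which is the first step before a real scanner fingerprints the service behind the port.

```python
import socket

def port_open(host: str, port: int, timeout: float = 2.0) -> bool:
    """Attempt a TCP connection; an accepted connection means the
    port is open and the service behind it is worth reviewing."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False
```

Looping this over your hardening checklist of ports that *should* be closed gives a quick sanity check between full scans.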
There is also patching your applications and enabling logging and auditing, but I’ll cover these separately.
Multi-Factor Authentication: Let’s assume that you already have a solid user name and password strategy and if you don’t, stop reading and make that happen first (as an aside, I’ve been reading a lot about length versus complexity lately and it may make for a future article). For the rest of us, we need to consider what we’re safeguarding as implementing MFA can be expensive and time consuming. Take stock of your present situation. You will probably find that you have some systems that are more critical than others, so that is where you begin. I won’t go into a detailed explanation on vendors and options; you can do that, but as with everything else, make sure you ask the right questions and get the right people involved.
Perhaps use of an authentication app will suffice, such as those available from Microsoft or Google that can be installed on your mobile. Maybe you’re looking for a smart card solution, biometrics, or a combination of factors. Remember that while it needs to be secure, it also needs to be usable. Few things are more frustrating than taking what feels like forever just to log in. Combined with multiple systems that don’t share credentials, you’re just asking for trouble, so it may also be time to consider Single Sign-On options.
Spend the time up front to figure out what will be the most usable solution for you that will deliver adequate security, then set about implementing it in a phased approach. It may seem like a challenge but adding that extra layer can mean the difference between a hacker exfiltrating your intellectual property versus them moving on to a softer target.
Daily Backups of Important Data: If you have data, you need to back it up, so the first part is already determined. Depending on service level agreements and who is responsible for your data, whether on-premises, hosted, or cloud-based, many other factors need to be considered. How long can you be down before you must have your services and data available? How much work can you stand to lose in the event you need to restore? Figuring out your Recovery Time Objective (RTO) and Recovery Point Objective (RPO) may determine your investment in the solution, and it needs to be a business-led conversation, not just a technology one. If you don’t have a plan, you’ll need to create one. If you already have a plan, it may be time to review it to make sure it meets your current objectives.
Determine what you need to back up in a prioritised order, and how to back it up. Will you do full backups every day or a full backup once a week with incremental daily backups? Will you use tapes, cloud, or replication to a DR site? Will you rotate media off site on a regular basis and how quickly can you get that media back when you need it?
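If you go the weekly-full / daily-incremental route, the daily job only needs the files that changed since the last full backup. Here is a minimal sketch of that selection step (Python, purely illustrative; real backup products use change journals or archive bits rather than raw timestamps):

```python
from pathlib import Path

def incremental_candidates(root: Path, last_full_time: float) -> list:
    """Files under `root` modified since the last full backup --
    the set a daily incremental needs to capture."""
    return sorted(
        p for p in root.rglob("*")
        if p.is_file() and p.stat().st_mtime > last_full_time
    )
```

The trade-off this illustrates: incrementals are small and fast to take, but a restore needs the last full backup plus every incremental since, which is exactly why restore testing matters.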
The backup itself is just a small part of the overall solution. Your Disaster Recovery / Business Continuity Plan (DR/BCP) needs to address a lot of moving parts and remove single points of failure. For example, if John is expected to be the one that kicks off the restore but he’s in Bermuda on a fishing trip without his mobile, someone needs to do his job.
Regular testing, including full-scale DR exercises, is highly recommended. Whether you need to restore a file for someone in HR or recover a 10 TB database, your system MUST work.
How Do I Make Them Work For Me?
Application Whitelisting: You probably already have the required hardware and software to make this a reality. Most modern endpoint protection applications, such as those from Symantec, Kaspersky, Sophos, and McAfee can perform application whitelisting. Modern UTM firewalls that offer application control are not really “Whitelisting” but can add another layer of defence if you choose.
It’s time to take stock and figure out what your business needs and what it doesn’t want. That comes down to what makes your business tick – the very applications you rely on.
Patching Applications: Once you have a current inventory of your applications and a reliable change management process in place, it’s time to begin (or at least keep going) with patching your systems to the current stable releases. Remove or replace any unsupported applications and make sure those changes are reflected in your application whitelisting solution. Create a list, subscribe to alerts, or at the very least ask your vendors to notify you of updates and patches so you can include them in your regular scheduled maintenance. When it comes to emergency or urgent patches, treat them as a priority. Recent incidents with WannaCry and Petya/NotPetya should have highlighted this.
Take a deep breath, and realise this isn’t going to happen overnight. Get the right people involved and don’t hesitate to put your hand up if you need some help. Begin with your current application inventory and if you’ve recently undertaken an Application Whitelisting project, you should already have that. Prioritise your applications and make sure you have the latest stable version of each. If you are a few versions behind, acquire, test, and deploy the patches using your change management process. Rinse and repeat!
Restricting Administrative Privileges: Technically, it’s easy, but I’ve yet to find someone willing to start revoking administrator rights (or granting them, for that matter) arbitrarily. You need a rock-solid policy to underpin this strategy, and it must be supported and enforced by management. The roles of staff should dictate what they can and cannot have access to. Where possible, use security groups rather than assigning admin rights to individual accounts… it’s easier to move users in and out of groups than to worry about individual accounts. Always remember to ask “why” the administrator privileges are required in the first place, as the request should be backed up with a solid business case.
Take inventory and then review the roles that have administrator privileges. Review your policies, plan, run it through proper change management, and then just get moving with the clean-up. And take your time…. this won’t happen instantly or overnight.
Patching Operating Systems: If you’re not patching your operating systems, start doing so. There are plenty of applications available that can scan your network, identify the patch levels of computers, and provide a report to advise which systems need which patches. Get those patches, test them, and deploy them but try to automate the process as much as possible. There will always be systems that cannot be updated or must be done manually. You may also need to get management involved to help enforce the idea that computers must be patched, and users cannot simply ignore the updates because they will put more than just themselves at risk. While it may be tempting to spend time evaluating every single patch that gets released, perhaps consider working with someone that understands your infrastructure, like a managed service provider, and have them either provide advice or look after the patching entirely.
Ask questions. Find out what your patch management strategy for operating systems is and ask if you can do anything better. Talk with managed services providers and specialists in patch management. Implement a regular, scheduled patching regime and allow for the occasional emergency update. Include a change management process in the strategy. Decide which patches are needed, test, and deploy. Happy days!
Disabling Untrusted Microsoft Office Macros: Determine if you need macros. If not, then happy days: just implement a blanket policy to disable them across the board and move on. For non-domain systems, just disable them in your applications. For the rest of us, and likely the majority, who need macros, it’s time to take inventory of the macros we use. Delete the ones we don’t and begin the process of vetting the ones we do. Digitally sign your required macros after thorough QA and testing, and then distribute and control as needed. Ideally, we should never execute an untrusted macro unless we’re the ones that developed it and are getting it ready to be made legitimate. Once these hurdles have been crossed, you can get back to unhindered productivity and make it out of the office before midnight.
Find out what your current policy is on Microsoft Office macros and if you don’t have one, consider creating one. As I mentioned earlier, this can be complex with a lot of moving parts, so unless you have the resources, like in-house skills and PKI, put up your hand and ask for help. If you have the resources, look at locking down your macros and controlling their distribution, along with the end users’ control over the applications. People are very skilled at Googling how to bypass security settings and pushing their limits. Logging and alerting may be a worthwhile side project to this as well. For those of you that already have all of this in place, including digitally signed macros, it’s time to run a health check on your current state to make sure it’s still doing what it’s supposed to. Nothing in this world is ever set-and-forget!
Application Hardening: Once you have an inventory of applications, find out how to secure them using either vendor or industry best practices. Test these changes to understand what you can and cannot do and then run them through change management bearing in mind the benefits and any potential negative impacts. Office politics will always be present when dealing with issues of control, so management support and enforcement is a good idea. Once the logistics have been looked after, set about implementing the changes. Unless you can control the changes through large scale distribution (such as AD Group Policy) it can be a bit cumbersome. Putting all required hardening into a base image helps, followed by implementing the hardened applications through distributed software points, so the hardening is already embedded.
As with most things, begin with a current state inventory to understand what you have. Understand how best to secure these applications (and other devices if you wish) and create a plan to address these issues. Perform proper testing and QA and ensure that proper change control is followed.
Management support is important, so it is seen as not just an IT approach, but a business approach. Work your way methodically through the systems with a goal of allowing secure functionality of your applications. Regular reviews, such as after major upgrades or staffing changes, are also recommended.
Like a good spring cleaning, get rid of anything you don’t absolutely need!
Multi-Factor Authentication: Start with a plan. Implementing MFA is important, but it needs to be done for the right reasons and implemented correctly. Evaluate what you are protecting and why, and get the users involved very early on – the last thing you want to do is drop it on the staff suddenly. We humans don’t like change! Evaluate your options and thoroughly understand the pros and cons of each solution. If you need help, consult with MFA specialists who can help you find the best solution using the right combination of vendor products and services. Some of you may already have the capability through existing services such as Microsoft subscriptions. Trial your solution with a pilot group, learn from that experience, then begin a phased roll-out. Throughout the whole experience, always bear in mind the end users who will have to use the solution. In an environment with many systems, you may also need to consider Single Sign-On.
Ask the questions to determine what your present stance is on MFA and if you don’t have it, ask if you should. If you already have it, ask if you can do it better or more securely. Always be willing to go back and re-assess, aligning your security posture with the present threat landscape. Once you have the answer to these questions, act.
Daily Backups of Important Data: Rather than just jumping straight into backing up files, make sure you have a plan in place and ideally this should be a part of your overall DR/BCP. Identify what you are backing up and why, the priority of the data, the recovery time and recovery point objectives, and how it is being backed up. Equally important is how it gets restored and by whom, when, and where. Don’t overlook the value of annual full-scale, live DR testing and regular revisions to the plans. Also remember to include any new systems and their data as well as any storage location movements. Vendor support and even support by a managed services organisation can be worth every penny.
Ask the questions, get informed, and if need be, get the right people involved. The ability to back up and restore critical information can mean the survival of your enterprise. Among the essential eight strategies, this one has probably been around the longest but is probably also the one that gets overlooked the most. Make sure that any future change to your data includes a section in change management that considers the backup and restore impacts.
What Are The Pitfalls?
Application Whitelisting: Many, which is why I recommend getting the right people involved and this means more than just the IT team. Management also needs to support and sign off on this initiative. Having it as part of your information security / general IT policies is also recommended. You need to know exactly what applications are on your network and which ones are needed. It’s not an easy voyage, but one worth taking. At the heart of it, executing code is the cause of a lot of breaches. Also consider that it’s not always malware; sometimes your own tools and utilities can be used against you!
Patching Applications: Without a doubt, Shadow IT can bite hard here. If you focus only on the “known” and approved applications, you may overlook the one-off applications downloaded to perform some task not officially sanctioned by the company. Even these one-off systems should be updated (or preferably removed until their existence can be justified and approved). In larger enterprises, patching applications can become all-consuming, as it seems there are updates every day. A solid change-management process to test, schedule, and deploy updates and patches on a prioritised basis is a must-have.
Restricting Administrative Privileges: There are plenty of things that can go sideways when it comes to restricting administrative privileges. Service accounts can break, so be sure you maintain the level of access required by the services and vendors. Maintain a secure local account on your network equipment in the event it cannot reach the domain for authentication, or else you may find yourself unable to fix a router or switch quickly. Failing to remove administrator access for employees who change roles or leave the company can cause hours and hours of “fun”. There may be accounts with administrative access to the most obscure things but ultimately, restricting the ability of a hacker to run riot on your systems, having a degree of accountability when changes are made, and giving people pause for thought before “clicking OK” is a solid strategy. There are tools available to help, and bringing in the pros to untangle the mess can be worth every penny. A good password management application is a big plus, too.
Patching Operating Systems: Plenty, but probably more from the perspective of not actually doing ANY patching at all. Not every update fixes every problem and sometimes, they can cause other issues which is why patches should be tested prior to deployment unless it’s critical and you can’t wait. Scheduling of patches needs to be handled right because you don’t want to reboot someone’s computer when they’re trying to make a deadline or have open documents with a lot of unsaved changes. Things can and do go wrong, but like wearing your seatbelt, I prefer the odds of having it on over not doing anything.
Disabling Untrusted Microsoft Office Macros: There can be a lot of moving parts here, so a plan is critical. Consider group policies, restricted privileges, macro control and distribution, digital signing and PKI, and you will quickly see how many places you can come off the rails. Please don’t throw this in the “too hard” bucket, because there is a lot to gain when macros are managed correctly, especially in an environment where productivity can be improved tenfold by their proper use but harmed a hundredfold by their exploitation.
Application Hardening: Many, because unless you harden your applications correctly, you may effectively commit a denial-of-service attack against yourself. Some applications may genuinely need “insecure” services or settings, which will have to be accepted but can be guarded using a defence-in-depth strategy. Ensure that your approach allows for functionality as well as security, because the most amazing applications are pointless if we can’t use them due to security settings. Asking the right questions of the right people, testing, and change management are crucial.
Multi-Factor Authentication: Unless your organisation is greenfields, the implementation will need to be gradual and well received by those used to just typing in their username and password. Hopefully by now you’ve already managed the nightmare known as password complexity requirements. Users may see this as just another obstacle to getting their work done, so education as to the “why” is beneficial (and just scaring people or using an “or else” approach helps no one). We’re all very much attached at the hand to our mobiles these days, so this may be the preferred approach. Many vendors make slick mobile MFA solutions (which I would prefer over SMS, but at the end of it, something is better than nothing… for now, at least).
Be prepared for resistance from users that refuse to install company-mandated apps on their personal devices. Even if you allow them to expense part of their devices, it is intrusive. Policy can help, or you can consider other means such as SMS, biometric, smart cards, or old-school fobs, but be ready for some politics.
Daily Backups of Important Data: A common pitfall is not adjusting backups to allow for new servers, data stores, or applications, so when new systems and new data come online, they’re not captured in the backup scheme. Also commonly overlooked are device backups such as firewall and router configurations, so if a device falls over, its replacement (or the device itself) can be quickly brought back up to speed. Another common pitfall is backing up everything…. just because. It’s all well and good to capture every tiny bit of data, but not at the cost of bandwidth, storage capacity, or the risk of overwriting critical information. Plan, execute, review, adjust the plan, and repeat.
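That review step can be partly automated by comparing the asset inventory against the backup job targets, which catches both pitfalls at once. The set names here are assumptions; in practice the data would come from your CMDB and your backup software.

```python
def review_backup_scheme(inventory: set[str], backup_targets: set[str]) -> dict[str, set[str]]:
    """Flag hosts missing from the backup scheme and jobs still running for retired hosts."""
    return {
        "not_backed_up": inventory - backup_targets,   # new systems never added to a job
        "orphaned_jobs": backup_targets - inventory,   # retired systems still consuming storage
    }
```

Run it as part of the “review” stage of the cycle and the gaps surface before a restore is ever needed.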
Are There Any Ghosts In The Machine?
Application Whitelisting: It’s us, plain and simple. At the end of the day, we just want to do our jobs, get paid, and go home to our families. Be ready to uncover shadow IT and related shadow data that often arise because of shortcuts (well-intended or otherwise) that we use to get the job done. Application Whitelisting can really help secure the environment but be prepared for some resistance from the masses.
Patching Applications: We are fooling ourselves if we think we can secure every application perfectly; risk will always remain. The key is to reduce the risk inherent in using applications to an acceptable level. Where the possibility to interact with an application exists, so does the ability to exploit the same. Technology was created by humans so human error is innate.
Restricting Administrative Privileges: Politics, plain and simple. Administrative access is a powerful element of a user’s psyche, and taking it away can open Pandora’s Box, but at the same time, also be the key to locking that very same box. Be ready for the battles that come with taking away admin rights, especially at the workstation level. Admittedly, Application Whitelisting can only go so far at the endpoint level in controlling the installation and execution of programs. You can consider separate privileged accounts for those times when the user “must” have access and the service desk is swamped. Managers and executives often demand administrator rights, so tread lightly and fully understand why before arbitrarily granting the power to the powers that be. Auditing and logging of privileged account activities should be considered as well, so when (not if) things get a little scary, you can follow the audit trail and make resolution a bit easier.
Patching Operating Systems: Human error will always be a factor. We will overlook patches, miss computers because they were offline, incorrectly assign patches to computers that don’t need them, and no doubt we will always find at least one user who simply cannot be interrupted or can’t be bothered rebooting their computer. Implement some checks and balances to help mitigate these potential landmines.
Disabling Untrusted Microsoft Office Macros: The macros themselves must be trusted because as you can imagine, if we make a mistake and then trust that mistake, digital signing won’t make an ounce of difference. You must QA the macros and thoroughly test them before using them. Human error, as with all things, is omnipresent.
Application Hardening: Shadow IT seems to creep into our systems through grey applications, which are neither explicitly approved nor denied for use in the infrastructure. These “unauthorised” programs can provide a quick and dirty workaround but, unless secured, can present a bigger risk to your environment. Shadow IT often exists because users feel the tools they are given are inadequate or unduly restricted, among many other reasons.
Multi-Factor Authentication: As with everything else, we humans seem to get in the way of perfect solutions. We lose our phones and are unable to log in. The same goes for smart cards and fobs that get left at home or lost. Even the technology itself can let us down: you may have your phone, but the battery is dead (which seems to happen a lot). There are plenty of ghosts here. Always have a “Plan B” to make sure users can get in when they need to. This is doubly critical for management and executives, who may refuse to accept there is an “issue” preventing them from getting their email and logging in to their computers.
Daily Backups of Important Data: The list of things that can go wrong is extensive, but simply assuming the backups will work every time is hazardous. As with all technology, things can and do go wrong. We all have stories about how our backups let us down at the worst time possible. You simply must stay on top of things, even if it’s feeding the logs into another system so we can quickly check the status of our backups and right the ship, so to speak. Like a good insurance policy, we need it to be there when it matters.
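Feeding the logs into another system can be as simple as flagging any host whose last backup failed or is older than your tolerance. The one-line-per-host CSV format here is a hypothetical example; adapt the parsing to whatever your backup product actually emits.

```python
from datetime import datetime, timedelta

def stale_or_failed(log_lines: list[str], now: datetime, max_age: timedelta) -> list[str]:
    """Hosts whose most recent backup failed or finished longer ago than max_age.

    Assumes a hypothetical log format of 'host,status,finished_iso8601' per line.
    """
    flagged = []
    for line in log_lines:
        host, status, finished = line.strip().split(",")
        if status != "success" or now - datetime.fromisoformat(finished) > max_age:
            flagged.append(host)
    return flagged
```

A report that is checked daily beats discovering the problem on restore day.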
Is There Anything Missing?
Application Whitelisting: Make sure you have the endpoint protection applied to every host that you can and think beyond just workstations…. locking down the ability of applications to execute on your servers – especially database servers and web servers – can be an invaluable tactic.
Patching Applications: While this approach seems to consider the current state, make sure to include any new applications as soon as they hit production. Even the latest and greatest systems will be updated at some point. Also, don’t overlook the software and firmware that run on your network appliances, physical and virtual. The programs that run your routers, switches, firewalls, load balancers and so on are still applications.
Restricting Administrative Privileges: If there is one thing you shouldn’t miss, it’s the presence of generic accounts that have administrator privileges – watch out for these! I advocate against generic accounts, but if you *must* have them, restrict them as tightly as possible and log everything they do. Also, wherever possible, try to leverage your directory services as the “source of truth” when logging onto network appliances. Changing the name of default administrator accounts doesn’t hurt either. Oh yes… remember good password practices, lest you end up with a hacker on the core switch using “admin” / “admin”.
Patching Operating Systems: Just remember the non-Windows systems such as Linux, UNIX, and macOS, and mobile platforms like iOS, Android, and BlackBerry. If you haven’t included network devices and IoT in your application patching strategy, include them here. They’re all part of your extended family!
Disabling Untrusted Microsoft Office Macros: By the way, it’s worth considering macros in applications other than Office. Microsoft isn’t the only one that figured out macros are incredibly powerful!
Application Hardening: It doesn’t hurt to look at network appliances that may be running default services that are not used or may be insecure. Network printers & multi-function devices, UPS systems, routers, switches, and more may be considered if one is undertaking a hardening exercise. FTP, SNMP, HTTP, TELNET and more are often running on these devices and may present a risk.
Don’t overlook patching your applications and enabling relevant logging and auditing.
Multi-Factor Authentication: In addition to considering approved work-arounds for those times when something goes a little sideways, you really need to consider the personal angle. Use MFA on everything you can – email, social media, banking, and so on. Be ready to defend yourself as an individual as well as your enterprise. Most popular platforms such as Outlook, Gmail, Facebook, Twitter, and more all support MFA, so do yourself a favour and set it up. A personal breach may give an attacker enough information to launch an attack on your enterprise – especially if you’re in the management tier of your organisation and a more attractive target.
Daily Backups of Important Data: While you’re at it, it’s time to evaluate backing up your personal data. Far too many of us fail to back up our home data and files, so with a wealth of cheap & cheerful options such as personal iCloud, OneDrive and GDrive, we’ve plenty of options. Just be wary of your bandwidth usage and it may be time to look at your ISP options…. you may even save a few dollars!
Bonus Points: Watch out for data stored on local drives of workstations and laptops…. anything business important should be stored on the corporate servers. I’ve seen a few instances of a staff laptop crashing only to lose vital work documents with the online copies several months out of date.
Disclaimer: The thoughts and opinions presented on this blog are my own and not those of any associated third party. The content is provided for general information, educational, and entertainment purposes and does not constitute legal advice or recommendations; it must not be relied upon as such. Appropriate legal advice should be obtained in actual situations. All images, unless otherwise credited, are licensed through ShutterStock.