Saturday, February 10, 2018

UserCenter battle continues as Check Point account services are still failing to do their job properly

In my previous post I mentioned that my old account had come back online. Since then, I have received several notifications from CP account services.

The first one was hilarious. They asked me to update my email with Pearson VUE before transferring certifications that had already been granted. After I asked them whether this was a joke, they reported that they had transferred my certification history. Well, I had to check. Guess what...

Two out of 14 certificates were lost in the process. Every time account services answer, they also close the open case. I have had to reopen it twice already.

So far nobody has picked up the challenge about the email address change. Too bad, as all this hassle could have been avoided completely if I were able to change that bloody email myself.

However, I would like to ask one more question. What is wrong with Check Point and its account services? Why are they failing at such a simple task?

Update: The issue is finally resolved. Six days and two escalations. For a simple email change. Fantastic job, Check Point, really well done.


Thursday, February 8, 2018

Changing jobs? Brace yourself for the impact of losing your UserCenter access

Probably the most annoying part of having an account with Check Point UserCenter is that you cannot change your email address.

Which is, please allow me to say it plainly, utterly stupid.

Eight years ago the company I was working for, Dimension Data, went through a re-branding phase. All emails were changed from 'name'@'region'.didata.com to 'name'@dimensiondata.com. With hundreds of accounts for company employees around the globe, the impact was huge.
The old email accounts were discontinued, so to fix this we approached Check Point with a request to re-assign the logins to the new email domain. Guess what the answer was?

- No can do.

So hundreds of DD engineers, sales and accounting people had to re-create email aliases to continue working with the Partners' portal and UserCenter. They are still using this method now, eight years later. It was easier to keep all the old email addresses afloat than to manually redefine tons of dependencies and details.

That was about business. On a personal level there is also a lot of pain. If you are changing jobs, be ready for Check Point to sever your access even if you ask them not to.

In my case, I left Dimension Data at the beginning of 2018. One month before that, I opened a case with account services to move my certification details, CheckMates account and UserCenter access to another email address. Once more, the answer was:

- We cannot do that. Please open a new UserCenter account and ask to move your certificates there. 

They also assured me that my old account would not be closed automatically. Guess what... It is no longer working.

The main implication of changing your email with UserCenter this way is that you lose your history and your CheckMates access. You will appear as a new user everywhere. You will have to wait until they figure out how to move your certifications. And I suspect that recovering expert access to UserCenter resources will also be quite a story.

I do not even want to speculate why an established security company cannot figure out how to change an account ID without killing it altogether in the process.

However, this is the reality we are facing today. If you are planning to change jobs, make sure you download all your valid certificates and bookmark your CheckMates threads, because you will not be able to keep all that intact after moving to another email address. Bugger...

I dare Check Point admins to name a single reason why I cannot change the email address on my account.

Anybody out there up for the challenge?


-----------------
Update: My old account is operational again. Whoever is responsible, thanks a lot. The issue of transferring the access level and certification history to a new account is not yet resolved. So the challenge stands.






Thursday, February 1, 2018

The main cyber security questions of 2017 and how to answer them

At the end of 2017 I was talking to a US-based business analytics firm, and the main question they asked was why.

- Why are security budgets not growing rapidly after all the scare with WannaCry and NotPetya?
- Why are businesses not spending more to protect themselves? Aren't they scared now?
- Why was the impact so hard, even for customers with high-end perimeter security systems?
- Why is this happening?


Well, let's start with the easy one. Businesses are scared.

They were scared long before the 2017 malware rampage. In 2017 they suddenly realised it does not matter how scared you are. They reached the limit of fear. They realised it does not matter how much you spend on perimeter security. It does not matter how well-known your vendor is, which part of the Gartner quadrant it occupies or how great its marketing campaign is. None of it matters. At the end of the day, a weak link will be found and you will be owned.

So business is doing what it does best - counting money. They have switched to risk management mode. For what it's worth, backup budgets were raised, not firewall budgets. Additional insurance and legal protection fees are on the rise, not perimeter security spending.

The second why is also simple but not as obvious. Perimeter security solutions today are top-notch, but they still fail their customers. You can have all the jazz: FW, IPS, Anti-Virus, sandboxing, and you will still miss something eventually. Or better yet, the business will not wait for your security cycle and will deploy something completely exposed, with, god forbid, SMB services open to the Internet.

Hello, WannaCry, here is your free lunch, come and get it.

In the eternal struggle between security professionals and the business, the latter always wins. Why? Think about it: it is just a matter of money. The business makes money; security spends some of it. If, from the business perspective, the cost-to-effect ratio is not getting better, additional spending is questionable at best.

Yet the major security vendors are still beating a dead horse. Every conference, every vendor event includes a scare presentation about malware on the loose, hacker success stories, and slides with names and damage figures in big red letters.

Well, good luck with that.

At Guardicore we take an alternative route. We protect your East-West traffic, securing against lateral movement in your infrastructure. We enable the business and speed up DevOps by applying dynamic labelling as part of micro-segmentation security policies, we provide unprecedented visibility into your assets' traffic, and we detect intrusion attempts and anomalies in real time. On top of all that, we provide dynamic deception to lure an attacker into a honeypot, making sure his tools and tactics are registered and blocked everywhere across the ecosystem.

The new age of security is here. You do not have to be scared anymore.

Tuesday, January 23, 2018

Come to my session at CPX in Barcelona


Hi all, if you are coming to CPX 360 in Barcelona, feel free to visit my session about hybrid cloud security practices. It takes place on Thursday at 14:00 in room 116.


Wednesday, January 3, 2018

Goodbye Check Point, Hello GuardiCore

Today is my last day with Dimension Data. Looking back at almost 10 years of work there, I want to say thank you to all my colleagues and friends for their support, help and assistance throughout that time. I felt appreciated and valued, and I have had many interesting projects, challenges and wins.

Later this week I will board a plane to Tel Aviv to join my new company: GuardiCore. I visited GuardiCore last September, while on vacation in Israel, at the invitation of Sharon Besser. I instantly fell in love with the company, the technology and the team. At that point my departure from Dimension Data was only a question of time.

I am leaving a very comfortable place to embark on an exciting new journey. I am also giving up my 17 years of Check Point engineering for the challenging world of cloud and virtualization security.

If you are concerned about your virtualized DC security, or if you are seriously considering a move to a cloud - private, hybrid or public - feel free to ask for advice. I will be happy to assist you in putting in place a brilliant and effective security solution: GuardiCore Centra.

I also have to add a note about my personal projects related to Check Point.

With this transition, unfortunately, I will have to put Check Point Expert Talks to rest.

This blog will remain up, and I am still deciding whether I will continue it as it is or run a spin-off for cloud security only.

Your thoughts on the matter are appreciated.

Anyhow, wish me luck and stay in touch. We will have yet another good ride, people. This time, to the cloud and beyond.

Monday, November 6, 2017

Kernel debug best practices, or why "fw ctl zdebug..." should not be used

Over the last several days I have seen a rapidly growing number of posts on CPUG and CP Community where the "fw ctl zdebug..." command was mentioned, used and advised.

Although some of you already know my position on the matter, I have decided to write a post about the growing habit of using zdebug instead of employing the full fw ctl debug mechanism.

Kernel debug in general


A Check Point FW is essentially a Linux-based system with a kernel module inserted between the drivers and the OS IP stack. If you do not know what I am talking about, you may want to look at this post with an explanatory video on the matter.

Extracting information about kernel-based security decisions is rather tricky, so Check Point developed an elaborate tool for reading information about the actions of the various FW kernel modules.

In a nutshell, each kernel module has multiple debug flags that force the code to start printing out information. I have numerous posts on this blog explaining different flags, tips and tricks for kernel debug, and also providing links to CP kernel debug documents.

Debug buffer


It is important to understand that the FW kernel is always printing out some debug messages. For most of the kernel modules, the error and warning flags are active, and the output goes to /var/log/messages by default. This is not practical for debugging, so before starting a kernel debug, an engineer needs to set up a buffer that will receive the debug output instead of the /var/log/messages file.

To do so, the following command is used: fw ctl debug -buf XXXXX, where XXXXX is the buffer size in KB. The maximum possible buffer today is 32 MB, but I advise my students to use 99999 to make sure they get the maximum possible buffer anyway.

The kernel can be very chatty, so a bigger buffer ensures fewer kernel messages are lost.
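
For example, a typical session could start by requesting the largest buffer possible; the kernel will cap the request at the 32 MB maximum anyway:

fw ctl debug -buf 99999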

Debug modules and flags


The FW kernel is a complex structure built from multiple modules, and each module has its own flags. One can run a single debug session with multiple flags raised on several modules. To raise debug flags, one uses one or several commands of this type:

fw ctl debug -m (module name) (+|-) (list of flags)

The + and - options allow you to raise and remove flags on the fly, even during an already running debug session. The list of modules and flags can be found via the first link in this post.
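
For example, one could start watching packet drops in the main fw module and then widen the search to connection tracking on the fly (flag names as per the kernel debug documentation linked above):

fw ctl debug -m fw + drop
fw ctl debug -m fw + conn

and later remove a flag that proved too noisy:

fw ctl debug -m fw - conn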

Printing info out of the buffer


Raising flags is not enough; to get the information, you need to start reading the buffer out with this command:

fw ctl kdebug -f (with some options)

There will be A LOT of information, so never do this on the console. Use an SSH session or redirect the output to a file.
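
For example, to read the buffer into a file with a timestamp on each message (the -T option; verify the exact kdebug options available in your version):

fw ctl kdebug -T -f > /var/log/kernel_debug.txt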

Stopping debug


Once you have collected the relevant info, you need to reset kernel debug to its default settings; otherwise your FW will continue printing out tons of unnecessary info. To do so, run:

fw ctl debug 0

What is fw ctl zdebug then?

fw ctl zdebug is an internal R&D macro for cutting corners when developing and testing new features in a sterile environment. It is equivalent to the following sequence of commands:

fw ctl debug -buf 1024
fw ctl debug (your options)
fw ctl kdebug -f
-------(waiting for Ctrl-C)
fw ctl debug 0

Why is this a problem?


If you are still reading this post and have made it to this line, you probably think zdebug is a godsend. It simplifies so many things; surely it is the only way to run debug in a production environment! Right?

Wrong. To put it plainly, here is the list of problematic points with this way of doing things:

1. The buffer is way too small. Lots and lots of messages might simply be lost because the buffer does not have enough room to hold them before they are read.
2. It is not flexible enough. Running debug in production requires a lot of consideration and a certain amount of caution. After all, you are asking the FW kernel to do extra things, lots of them. The best practice is to start with a single flag or two and expand the area of research on the fly while trying to catch the issue. This is impossible to do with the fw ctl zdebug macro.
3. It is too simple to use. You could say, what a funny argument. Yet, let's think about it. To master kernel debug as described above, one has to understand the kernel structure, dependencies, flags and modules. You don't have to know any of that to run fw ctl zdebug drop, and many people do just that.

And guess what, this is also the simplest way to bring your busy production FW cluster down. So no, do not try this at home or at your place of work, if job security is important to you.
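
To wrap up, here is a minimal sketch of what a cautious production debug session could look like, following the steps above; the drop flag is only an illustration, start with whatever single flag matches your issue:

fw ctl debug -buf 99999
fw ctl debug -m fw + drop
fw ctl kdebug -T -f > /var/log/kernel_debug.txt
-------(reproduce the issue, then Ctrl-C)
fw ctl debug 0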


-----------
Support CPET project and this blog with your donations to https://www.paypal.me/cpvideonuggets 


Monday, October 30, 2017

Check Point researchers dissect the IOTroops botnet

The Check Point security research team has recently posted an elaborate and impressive report on the IOTroops botnet.

The details and depth are fascinating. Highly recommended reading.

-----------
Support CPET project and this blog with your donations to https://www.paypal.me/cpvideonuggets