Mail reports not sent this morning (solved)
24 June 2020
This morning the IPv6 address of one of our mail servers was briefly listed on the Spamhaus CSS list. While regular ShadowTrackr mail remained functional, all email reports sent from the website were affected. Our logs show that about a dozen reports were not sent; apologies for that.
The problem was likely triggered when someone who received an email from that web server marked it as spam. The problem is fixed now, and I'll run an extra check on all accounts and webforms/reports for lingering problems.
Python API available on Github
23 June 2020
There has been a ShadowTrackr API for about two years now. Well, sort of an API. It was functional: you could get a feed with all notifications and put them in your SIEM. There were other endpoints, but their value was questionable, and to be honest I never noticed clients using anything other than the notification feed.
Today version 2 of the API is live, and it brings many improvements. You can query all your websites, certificates, hosts and whois records. The information available per asset through the API is now the same as through the GUI, and you get proper text descriptions of current problems and warnings. Of course, you can still get the feed. Have a look at the API documentation for the details.
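As a rough illustration of the feed-to-SIEM use case mentioned above (the base URL, endpoint path, parameter names, and response fields below are assumptions for the sketch, not the documented API; see the API documentation for the real ones):

```python
# Hypothetical sketch of polling a notification feed and flattening it
# for SIEM ingestion. All endpoint and field names here are assumed.
from typing import List, Optional
from urllib.parse import urlencode

BASE_URL = "https://shadowtrackr.com/api/v2"  # assumed base URL

def build_feed_request(api_key: str, since: Optional[str] = None) -> str:
    """Build the URL for a notification feed query (hypothetical endpoint)."""
    params = {"api_key": api_key}
    if since:
        params["since"] = since  # only fetch notifications after this timestamp
    return f"{BASE_URL}/feed?{urlencode(params)}"

def forward_to_siem(notifications: List[dict]) -> List[str]:
    """Flatten feed entries into one-line events a SIEM can ingest."""
    return [
        f"{n.get('time', '?')} {n.get('asset', '?')}: {n.get('message', '')}"
        for n in notifications
    ]
```

In practice you would fetch `build_feed_request(...)` with your HTTP client of choice, parse the JSON, and hand the flattened lines to your SIEM's ingest pipeline.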
The whole idea of having an API is to make it possible to integrate ShadowTrackr with other security tools, and that integration often happens in Python. So, to get you started, there is now a Python package for ShadowTrackr, with the source code available on GitHub.
If you have any special requests please let me know, and I’ll do my best to support them.
Scaling out for better performance
07 June 2020
If all went well, you shouldn't have noticed anything of the big migration. You should, of course, notice the more stable and faster user interface!
Up until now, most performance problems could be solved by simply running on a bigger server: more memory, more CPU, and the problems went away. I always knew this wouldn't last forever and that at some point we'd be scaling out instead of up. So, fortunately, things were prepared.
Web and DB servers are now split, so if the backend gets busy, the frontend will still respond fast. Deep down, most database clusters are just fancy ways of serialising writes to one DB node and spreading reads over the other DB nodes. ShadowTrackr now handles this at the application level for even better performance: at every DB query, both frontend and backend specify whether it needs to be a write query (done on the master DB node) or a read query (done on a slave DB node). This freed up a lot of CPU; the many small servers now perform far better than the few big servers did before scaling out.
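A minimal sketch of that application-level read/write split (node names and the round-robin policy are illustrative, not ShadowTrackr's actual code):

```python
# Illustrative sketch: the caller declares every query read or write,
# writes go to the master node, reads rotate over the replicas.
import itertools
from typing import List

class QueryRouter:
    def __init__(self, master: str, replicas: List[str]):
        self.master = master
        self._replicas = itertools.cycle(replicas)  # simple round-robin

    def node_for(self, query: str, write: bool) -> str:
        """Pick the DB node a query should run on."""
        if write:
            return self.master        # all writes serialise on one node
        return next(self._replicas)   # reads spread over the replicas

router = QueryRouter("db-master", ["db-read-1", "db-read-2"])
router.node_for("INSERT INTO hosts ...", write=True)   # always the master
router.node_for("SELECT * FROM hosts", write=False)    # a read replica
```

The key design choice is that routing is explicit per query rather than inferred by a cluster proxy, which is what frees the master from read load.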
On top of this, the backend nodes spread around the world now have a shared cache. This reduces lookups to the central databases, and also reduces the number of queries the nodes send out to external APIs.
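The shared-cache idea can be sketched as a cache-aside lookup with a TTL (the in-memory dict below stands in for whatever shared store the nodes actually use; all names are illustrative):

```python
# Illustrative cache-aside sketch: hits skip the central DB and external
# APIs entirely; misses fall through once and populate the cache.
import time

class SharedCache:
    def __init__(self, ttl_seconds: float = 300.0):
        self._ttl = ttl_seconds
        self._store = {}  # key -> (value, expiry timestamp)

    def get_or_fetch(self, key, fetch):
        entry = self._store.get(key)
        now = time.monotonic()
        if entry and entry[1] > now:
            return entry[0]           # cache hit: no external lookup
        value = fetch(key)            # miss: query DB / external API once
        self._store[key] = (value, now + self._ttl)
        return value

calls = []
def lookup(host):
    calls.append(host)                # stands in for an expensive API call
    return {"host": host, "ips": ["192.0.2.1"]}

cache = SharedCache(ttl_seconds=60)
cache.get_or_fetch("example.com", lookup)
cache.get_or_fetch("example.com", lookup)  # second call served from cache
```

With the cache shared between backend nodes, one node's lookup also spares the others the same DB query or external API call until the entry expires.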
So, lots of improvements. Next up is the ShadowTrackr API: we'll be adding functionality to add assets and query scan results.