Python API available on GitHub
23 June 2020
There has been a ShadowTrackr API for about 2 years now. Well, sort of an API. It was functional. You could get a feed with all notifications and put them in your SIEM. There were other endpoints, but their value was questionable and to be honest I never noticed clients using anything other than the notification feed.
Today version 2 of the API is live, and it has many improvements. You can query all your websites, certificates, hosts and whois records. The information per asset available through the API is now the same as through the GUI, and you get proper text descriptions of current problems and warnings. Of course, you can still get the feed. Have a look at the API documentation for the details.
The whole idea of having an API is to provide the opportunity to integrate ShadowTrackr with other security tools. And that often tends to happen in Python. So, to get you started, there is now a Python package for ShadowTrackr, with the source code available on GitHub.
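As a minimal sketch of talking to the API directly, the snippet below fetches the notification feed with only the standard library. The base URL, the `/feed` path and the `api_key` parameter are assumptions for illustration; check the API documentation (or the Python package) for the real endpoint names.

```python
import json
import urllib.parse
import urllib.request

# Hypothetical base URL; the real one is in the API documentation.
BASE_URL = "https://api.shadowtrackr.com"


def feed_url(api_key: str) -> str:
    """Build the notification-feed URL (path and parameter are assumptions)."""
    return f"{BASE_URL}/feed?" + urllib.parse.urlencode({"api_key": api_key})


def get_notifications(api_key: str) -> list:
    """Fetch and decode the JSON notification feed."""
    with urllib.request.urlopen(feed_url(api_key), timeout=10) as resp:
        return json.load(resp)
```

From here, forwarding each notification to a SIEM is a matter of iterating over the returned list in whatever shipper you already use.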
If you have any special requests please let me know, and I’ll do my best to support them.
Scaling out for better performance
07 June 2020
If all went well, you shouldn't have noticed anything of the big migration. You should, of course, notice the more stable and faster user interface!
Up until now, most performance problems could be solved by just running on a bigger server: more memory, more CPU, and the problems went away. I always knew this wouldn't last forever and that at some point we'd be scaling out instead of up. So, fortunately, things were prepared.
Web and DB servers are now split, so if the backend gets busy, the frontend will still respond fast. Deep down, most database clusters are just fancy ways of serialising writes to one DB node and spreading reads over the other DB nodes. ShadowTrackr now handles this at the application level for even better performance: for every DB query, both frontend and backend specify whether it needs to be a write query (done on the master DB node) or a read query (done on a slave DB node). This freed up a lot of CPU. The many small servers now perform far better than the few big servers did before scaling out.
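The routing idea can be sketched in a few lines. This is not ShadowTrackr's actual code, just a simplified illustration of application-level read/write splitting: writes always go to the master, reads are spread over the replicas.

```python
import random


class RoutingPool:
    """Route each query to the master or a read replica.

    Simplified sketch: real code would manage connection pools,
    health checks and failover instead of plain strings.
    """

    def __init__(self, master, replicas):
        self.master = master
        self.replicas = list(replicas)

    def connection_for(self, write: bool):
        # Writes must hit the master; reads may use any replica.
        if write or not self.replicas:
            return self.master
        return random.choice(self.replicas)
```

The caller decides at every query whether it is a write, which is exactly the piece of information a generic cluster proxy has to guess at.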
On top of this, the backend nodes spread around the world now have a shared cache. This reduces lookups to the central databases, and also reduces the number of queries the nodes send out to external APIs.
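The cache-aside pattern behind this is simple: check the shared cache first, and only hit the central database or an external API on a miss. A toy in-memory version (standing in for a real shared cache such as Redis; names are illustrative) looks like this:

```python
import time


class SharedCache:
    """In-memory stand-in for a shared cache, with per-entry TTL."""

    def __init__(self):
        self._store = {}

    def get(self, key):
        entry = self._store.get(key)
        if entry is None:
            return None
        value, expires = entry
        if time.monotonic() > expires:
            del self._store[key]  # entry expired
            return None
        return value

    def set(self, key, value, ttl=300):
        self._store[key] = (value, time.monotonic() + ttl)


def lookup(cache, key, fetch, ttl=300):
    """Cache-aside: call the expensive fetch only on a cache miss."""
    value = cache.get(key)
    if value is None:
        value = fetch(key)
        cache.set(key, value, ttl)
    return value
```

Because the cache is shared between backend nodes, a lookup done by one node saves the same external API call on every other node until the entry expires.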
So, lots of improvements. Next up is the ShadowTrackr API: we'll be adding functionality to add assets and query scan results.
Adding and removing assets through the API
03 May 2020
I love automating stuff. If you do this properly from the start you can do so much more work in so much less time. Really, any task you do more than twice should be automated if possible.
ShadowTrackr power users who want to automate things can now add and remove assets through the API, in bulk. Just throw a mixed list of urls, ips and subnets at it and it will validate, deduplicate and add them for you. Check out the details in the API documentation.
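The validate-and-deduplicate step you'd do client-side before calling the bulk endpoint can be sketched with the standard library. The function names and the exact normalisation rules are illustrative, not ShadowTrackr's own:

```python
import ipaddress
from urllib.parse import urlparse


def classify_asset(raw: str):
    """Return ('ip' | 'subnet' | 'url', normalised form), or None if invalid."""
    s = raw.strip()
    if not s:
        return None
    # Try a single IP address first.
    try:
        return ("ip", str(ipaddress.ip_address(s)))
    except ValueError:
        pass
    # Then a subnet in CIDR notation.
    try:
        return ("subnet", str(ipaddress.ip_network(s, strict=False)))
    except ValueError:
        pass
    # Fall back to treating it as a URL (assume https if no scheme).
    parsed = urlparse(s if "://" in s else "https://" + s)
    if parsed.hostname and "." in parsed.hostname:
        return ("url", parsed.geturl())
    return None


def prepare_bulk(raw_assets):
    """Validate and deduplicate a mixed list, preserving input order."""
    seen = set()
    out = []
    for raw in raw_assets:
        item = classify_asset(raw)
        if item is not None and item not in seen:
            seen.add(item)
            out.append(item)
    return out
```

Feeding the cleaned list to the API then becomes a single bulk call instead of one request per asset.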
And if you have any cool API ideas, I'm always happy to hear them. Have fun!