Our clients frequently ask us "is my data safe?" Here's an overview of our security practices, and how we keep your data safe, accessible and available. If you have further questions, don't hesitate to contact us.
Obviously, hackers read these pages too. We're not going to disclose all our security practices — only those that any smart hacker can figure out all by themselves anyway.
Found a security threat?
If you believe you have found a security exploit, learned about a new threat model, or want to report a security incident, please contact us immediately. We will keep all your data confidential. You can send us an email at firstname.lastname@example.org, or call us any time at +49 157 3432 5347. We will deal with your reported issue right away.
Our software runs on the Google Cloud Platform. This is a Platform-as-a-Service, which enables application developers to focus on creating their application, while Google takes care of hosting it, providing the database, doing automated backups, logging, auditing, physical access security, and so on. Load-balancers distribute the load, and start and stop additional virtual machines when needed.
App Engine uses Java 7, and Jetty as the application server. All static files are automatically hosted by the Google Content Delivery Network. Servers can be fired up at any Google data center location, which means even downtime in one data center doesn't affect the application, as it can continue to serve from another center.
Google constantly audits its services and has been certified compliant with industry security standards. Read more here.
By default, data is stored on Google's US servers, but we can also host it on EU servers; just let us know.
Passwords and cookies
Passwords Your passwords are never stored in plain text; instead they are hashed with salts, using the BCrypt algorithm. In layman's terms, your password is garbled beyond recognition before it is saved, and it cannot be reversed without enormous effort. When a user logs in, the password they enter on the login screen is hashed in the same way, and the two hashes are compared to verify they match. This also means we cannot recover passwords for you; you need to reset them, since reversing the hash is practically impossible.
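For the technically curious, the hash-and-compare flow looks roughly like this. This is a Python sketch using the standard library's PBKDF2 as a stand-in for BCrypt (which needs a third-party library); the function names are illustrative, not our actual implementation:

```python
import hashlib
import hmac
import os

def hash_password(password, salt=None):
    """Derive a salted hash; only (salt, hash) are stored, never the password."""
    salt = salt or os.urandom(16)
    digest = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, 100_000)
    return salt, digest

def verify_password(password, salt, stored):
    """Re-hash the login attempt and compare the hashes in constant time."""
    _, candidate = hash_password(password, salt)
    return hmac.compare_digest(candidate, stored)

salt, stored = hash_password("correct horse battery staple")
assert verify_password("correct horse battery staple", salt, stored)
assert not verify_password("wrong guess", salt, stored)
```

Note that verification never decrypts anything: the login attempt is hashed and the hashes are compared, which is why a lost password can only be reset, not recovered.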
If you don't want to use our system to store passwords at all, you're free to integrate with OneLogin or Google Apps via SSO; passwords will then be managed in their systems (or in your own LDAP or Active Directory, if you configure those systems accordingly).
Cookies Cookies may be used to authenticate, but like Amazon or Expedia or the other major players, the cookie-based authentication does not store your actual password in the cookie. All that gets saved is a randomly created token that allows you to log in and access basic functionality. But if you want to access security-relevant settings like password or email settings, or the administration features, you'll still be prompted to provide your actual password.
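The token idea can be sketched in a few lines. This is a simplified Python illustration with invented names (`SESSIONS`, `issue_remember_me_token`), not our production code:

```python
import hmac
import secrets

SESSIONS = {}  # token -> user id; in production this lives in a server-side store

def issue_remember_me_token(user_id):
    """Create a random token for the cookie; the password is never stored in it."""
    token = secrets.token_urlsafe(32)
    SESSIONS[token] = user_id
    return token

def user_for_token(token):
    """Look up the session; constant-time comparison avoids timing attacks."""
    for stored, user_id in SESSIONS.items():
        if hmac.compare_digest(stored, token):
            return user_id
    return None

token = issue_remember_me_token("alice")
assert user_for_token(token) == "alice"
assert user_for_token("forged-token") is None
```

Because the cookie holds only this random token, stealing it never reveals the password, and sensitive operations can still demand the real password on top.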
If security is paramount, you can switch off the Remember Me functionality entirely for your company account; you'll find that option in the advanced settings dialog. You can also enable 2-step verification for specific (or all) accounts to protect them even further (see below).
2-Step Verification We support 2-step verification using Authy, which supports SMS tokens and a mobile app for token generation. 2-step verification can be enabled on a per-user basis for key users, and enforced for all admin/IT staff at once; it forces a user to verify each new device they want to connect to Small Improvements. So even if a password has been compromised (e.g. stolen from another service where a user reused the same password), an attacker would still not be able to log in to Small Improvements. Learn more on the documentation page.
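The one-time codes generated by authenticator apps follow the standard TOTP scheme (RFC 6238): server and phone share a secret and derive the same short-lived code from the current time. A minimal Python sketch of that algorithm (illustrative; Authy's implementation is their own):

```python
import hashlib
import hmac
import struct
import time

def totp(secret, at=None, step=30, digits=6):
    """RFC 6238 time-based one-time password (HMAC-SHA1 + dynamic truncation)."""
    counter = int((time.time() if at is None else at) // step)
    msg = struct.pack(">Q", counter)
    digest = hmac.new(secret, msg, hashlib.sha1).digest()
    offset = digest[-1] & 0x0F
    code = (struct.unpack(">I", digest[offset:offset + 4])[0] & 0x7FFFFFFF) % 10 ** digits
    return str(code).zfill(digits)

# Server and phone derive the same code from the shared secret and time window.
secret = b"shared-enrollment-secret"
assert totp(secret, at=59) == totp(secret, at=31)  # same 30-second window
assert len(totp(secret)) == 6
```

An attacker with only the password still cannot produce a valid code, which is exactly why the stolen-password scenario above fails.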
All data is encrypted in transit using HTTPS/SSL. In addition, we encrypt string-based content such as written feedback, objectives, and performance reviews in the database on a per-field basis, using symmetric AES-256 encryption. We don't add extra encryption to short fields like names, email addresses, or integer or boolean fields: symmetric encryption of a boolean or integer just doesn't make sense (it's guessable within seconds), and we need to perform searches on fields such as names and email addresses, which wouldn't work if that data were encrypted. But the data that's crucial and must not be exposed under any circumstances, for instance the content of objectives or feedback, is of course encrypted.
The encryption/decryption process happens on the server, in the service layer, before and after accessing the database. We're sometimes asked why it doesn't happen on the client already. With symmetric encryption, encrypting on the client doesn't help: other clients need to decrypt the data as well, so the decryption key would have to be distributed to the clients too, which would actually be less secure. We keep the key on the server only.
Asymmetric (public-key) cryptography works differently: a user has two keys; one is shared publicly and is used to encrypt messages to that user, while the private key is kept local and never shared with anyone. However, since the private key cannot be shared, it cannot be managed inside a cloud app, only by the end user. And since the private key needs to be several hundred characters long, you also need a secure "wallet" to store it and share it between devices. All users would need to manage their own private keys, which is very time-consuming to set up, so most users wouldn't use the system; hence almost no B2B cloud services use this approach yet.
In addition you (as the admin) can enforce further security mechanisms:
Password length enforcements By default we require passwords to be at least 6 characters long. In addition, we give users an indication of how secure their password is. 6 characters is pretty short, so you can define a company-wide policy of minimum password length, for instance forcing passwords to be at least 10 characters long. The setting can be found in the Global Settings dialog, in the advanced tab.
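The policy check itself is conceptually simple. A minimal Python sketch, with a hypothetical `check_password_policy` helper standing in for our actual validation:

```python
def check_password_policy(password, min_length=6):
    """Reject passwords shorter than the configured company-wide minimum."""
    if len(password) < min_length:
        raise ValueError(f"Password must be at least {min_length} characters long")
    return True

assert check_password_policy("abcdef")                    # default minimum of 6
assert check_password_policy("longenough", min_length=10)
```

Raising the company-wide minimum simply changes the `min_length` applied to every new password.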
IP Range restrictions If you'd like to restrict access to your company account to a certain IP range (e.g. your office plus anyone who can log in via your VPN), you can restrict those IP ranges as well. While some people argue that IP addresses can be spoofed, it's not trivial to achieve this for more than a single request (the response from the server gets sent to the IP address the attacker pretends to be at). So IP range filtering adds to security as well.
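The check itself is straightforward; here's a sketch using Python's `ipaddress` module, with documentation-reserved example ranges standing in for your office and VPN:

```python
import ipaddress

ALLOWED_RANGES = [
    ipaddress.ip_network("203.0.113.0/24"),   # office (example range)
    ipaddress.ip_network("198.51.100.0/24"),  # VPN exit nodes (example range)
]

def ip_allowed(remote_ip):
    """Allow the request only if the caller's IP falls inside a permitted range."""
    addr = ipaddress.ip_address(remote_ip)
    return any(addr in net for net in ALLOWED_RANGES)

assert ip_allowed("203.0.113.42")
assert not ip_allowed("192.0.2.1")
```

Every incoming request is checked this way before any authentication logic runs.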
Instant lockout If you disable a user or change their password, their session lives on for a few minutes if they are already logged in. If you need to lock the user instantly, you can also kill all their sessions from the security tab on the user's profile. This is also useful if only one device of a user went missing: Simply lock that tablet or phone's session.
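Session invalidation of this kind can be sketched as follows; the session store and token names are invented for illustration:

```python
SESSIONS = {
    "tok-laptop": {"user": "bob", "device": "laptop"},
    "tok-phone":  {"user": "bob", "device": "phone"},
    "tok-other":  {"user": "carol", "device": "laptop"},
}

def kill_sessions(user, device=None):
    """Invalidate all of a user's sessions, or only those of one lost device."""
    doomed = [t for t, s in SESSIONS.items()
              if s["user"] == user and (device is None or s["device"] == device)]
    for token in doomed:
        del SESSIONS[token]
    return len(doomed)

assert kill_sessions("bob", device="phone") == 1   # lost phone only
assert "tok-laptop" in SESSIONS                    # laptop session survives
assert kill_sessions("bob") == 1                   # full instant lockout
```

Killing the server-side session record makes the cookie on the stolen device worthless immediately, without waiting for it to expire.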
Using secure components
A secure data center alone is not sufficient, of course. The software must be based on a secure foundation. We only use battle-hardened libraries that are safe by design, meaning we don't need to write all the security code ourselves; it's baked into our tools already.
Cross-site request forgery (CSRF or XSRF) Our web framework, Apache Wicket, effectively prevents XSRF attacks by embedding tokens into forms and AJAX requests, so attackers can neither replay requests nor intercept a call and execute it from another session.
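Wicket handles this for us, but the underlying idea can be sketched in a few lines of Python (names are illustrative):

```python
import hmac
import secrets

def new_csrf_token(session):
    """Store a per-session random token and embed it in every rendered form."""
    token = secrets.token_urlsafe(32)
    session["csrf_token"] = token
    return token

def validate_csrf(session, submitted_token):
    """A forged cross-site request won't know the token, so it is rejected."""
    expected = session.get("csrf_token", "")
    return hmac.compare_digest(expected, submitted_token or "")

session = {}
token = new_csrf_token(session)
assert validate_csrf(session, token)
assert not validate_csrf(session, "attacker-guess")
```

Since a malicious page on another site cannot read the token out of our forms, any request it forges fails the check.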
SQL Injection The absence of SQL means that SQL injection attacks are not possible. Google App Engine is based on the Bigtable datastore, which is essentially a hash-table of hash-tables. While its query language GQL resembles SQL a little, it's by no means as powerful, and therefore not suitable for SQL injection attacks. Also, we use middleware called Objectify, and all input gets sanitized too.
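To illustrate the classic attack in a system that does use SQL (ours doesn't), here is the contrast between parameterized queries and naive string concatenation, using Python's built-in sqlite3 purely as an example:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (name TEXT, is_admin INTEGER)")
conn.execute("INSERT INTO users VALUES ('alice', 1), ('bob', 0)")

user_input = "' OR '1'='1"  # classic injection payload

# Parameterized query: the driver treats input strictly as data, never as SQL.
rows = conn.execute("SELECT name FROM users WHERE name = ?", (user_input,)).fetchall()
assert rows == []  # the payload matches nothing

# Naive string concatenation (never do this): the payload rewrites the query.
rows = conn.execute("SELECT name FROM users WHERE name = '" + user_input + "'").fetchall()
assert len(rows) == 2  # injection returned every user
```

On the datastore, queries are built from typed objects via Objectify rather than from strings, so there is no equivalent string to poison in the first place.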
Web server exploits Our software runs on Jetty, which runs inside the well-protected Google data centers. Google keeps these Jetty servers up to date, and in case a security vulnerability is detected, it will get fixed immediately, since the entire Google cloud computing business model would be at stake.
SSL Our entire website (even the normal documentation pages) enforces secure connectivity via SSL (https), using a Thawte Extended Validation (EV) certificate.
Preventing unauthorized access from within
Even an authenticated user (e.g. someone who rightfully logged in) may of course try to stage an attack. For instance, an attacker may sign up for a trial account and try to hack their way into a different client's account. In addition to using safe libraries as outlined before, there are several levels of protection against this in the application code itself.
Each database object is internally associated with a company ID. Even if an attacker were able to steal another client's object IDs (using social engineering on another company's employee), the attacker would not get access to those objects, because our product always checks whether the viewer originates from the same company ID as the object they are trying to look at. Whenever our application notices a mismatch, it immediately throws an exception and notifies our administrators. The check is performed for each object access, and each SI page typically displays dozens of objects. A single mismatch out of a hundred object accesses will cancel the entire page operation.
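The tenancy check described above can be sketched like this (a Python illustration; the record names and exception are invented, and our real Java implementation differs):

```python
class AccessDenied(Exception):
    pass

# Illustrative records: every stored object carries its owning company's ID.
OBJECTS = {
    "review-17": {"company_id": "acme", "content": "annual review"},
    "review-99": {"company_id": "globex", "content": "peer feedback"},
}

def load_object(viewer_company_id, object_id):
    """Refuse access whenever the viewer's company differs from the object's."""
    obj = OBJECTS[object_id]
    if obj["company_id"] != viewer_company_id:
        raise AccessDenied(f"company mismatch on {object_id}")  # admins get notified
    return obj

assert load_object("acme", "review-17")["content"] == "annual review"
try:
    load_object("acme", "review-99")  # ID stolen from another company
    assert False, "should have raised"
except AccessDenied:
    pass
```

Because the check runs on every single object access, one stolen ID buys an attacker nothing but an alert to our administrators.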
Strong role-based model The other attack vector would be an unhappy employee from within the system. To prevent this, each individual screen of the application and even each single building block is protected by a role-based mechanism as well. Only administrators can access admin screens in the first place. Some screens (like the "data reset" screen) even require admins to get in touch with us, so that a disgruntled administrator cannot simply export or erase your data.
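A role-based guard of this kind is often expressed as a decorator wrapping each protected screen; here is an illustrative Python sketch (the role names and functions are hypothetical):

```python
import functools

class Forbidden(Exception):
    pass

def requires_role(role):
    """Guard a screen or building block so only users with the role can use it."""
    def decorator(view):
        @functools.wraps(view)
        def wrapper(user, *args, **kwargs):
            if role not in user.get("roles", ()):
                raise Forbidden(f"{user['name']} lacks role {role!r}")
            return view(user, *args, **kwargs)
        return wrapper
    return decorator

@requires_role("admin")
def admin_screen(user):
    return "admin dashboard"

assert admin_screen({"name": "alice", "roles": ["admin"]}) == "admin dashboard"
try:
    admin_screen({"name": "bob", "roles": ["employee"]})
    assert False, "should have raised"
except Forbidden:
    pass
```

Checking the role at every building block, not just at the navigation level, means a user cannot reach an admin feature simply by guessing its URL.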
Preventing social engineering and attacks against SI staff computers
Many attacks these days are not targeting the server, but work by tricking staff into downloading and running infected software, or visiting sites that have been compromised and which install malware onto visitors' computers. There are several ways we reduce these risks:
We're a small team, so it's impossible for someone to pretend to be someone important from another business division. A common social engineering technique is to place a call like this: "Hi, this is Joe from the IT department. We're seeing unusual activity on your computer, can you please visit the link I just sent you by mail, to install the latest anti-virus software". We're way too small for this to happen.
We're security-aware and inform all new staff to be cautious. Even if an attacker impersonated the CEO and sent an unrequested (and infected) file by email, claiming "open the file to check out our latest business stats" for instance, the recipient would ask for confirmation. We use internal systems to share documents, so an unsolicited file in an odd-sounding email will raise suspicion instantly.
We're using only the most up-to-date browsers (Chrome, Firefox and Safari, to be specific), which are substantially more secure than older versions, let alone older Internet Explorer versions.
We keep our operating systems up to date as well. We're mainly on Apple and Linux, which are substantially harder to attack than Windows. Trusting the OS alone would be foolish, though, so we use further mechanisms to keep our computers protected.
We use different passwords for every service we use, and the use of password keystores also helps against keyloggers. So even if some site like LinkedIn gets hacked again, it doesn't matter that much since we use different passwords all the time.
Our development and deployment machines' hard drives are encrypted, so even in the case of theft, an attacker wouldn't be able to reverse-engineer our source code or upload a compromised version of Small Improvements.
There are lots of other mechanisms which we won't discuss on our website. We're not claiming to have super-human abilities, but security is our biggest concern, since we'd be out of business if someone managed to hack us. So there are plenty of other items we're considering when developing our features, training our staff and deploying new versions.
Full disclosure policy: Should anything ever happen, we will fully disclose the incident to minimise damage. Our previous experience at companies such as Atlassian shows that when a security breach happens, transparency is the only way to deal with it: informing customers what has happened and how to take precautions.
Access restrictions to our database
Our database is hosted inside Google Data centers, and thereby in some of the most secure places possible.
Non-physical access to our production database is severely limited too. Only 3 people can upload new software releases and only 4 can view the actual raw database. Access to the live database and to the servers is restricted to computers that have been authenticated by two-factor authentication. Even if someone managed to retrieve the master password, it would still be useless without that two-factor authentication access code.
Access to the administration website is equally restricted. Only the aforementioned administrators can access all global admin pages, while support staff can only see very restricted "general information" about a client (like how many users, who logged in when, what review cycles exist, etc). Regular support staff cannot analyze a customer's actual content (like actual performance reviews, messages, etc). This functionality is restricted to the people who can access the raw database.
It is our policy to not look into customer data, unless permission has been granted by the customer to help troubleshoot a bug. Most bugs we encounter can be fixed by reproducing the situation locally, and by analysing the log files and stack traces, which do not display customer data beyond IDs and basics like employee names.
Google App Engine is hosted on a highly distributed network across Google data centers. The data is constantly replicated as well, so even if an entire data center goes down, the others still have all the data and continue serving requests without any end user noticing.
We create daily backups as well, of course. So in the event of catastrophic failure of all data centers, or in the case of a grave programming mistake that accidentally wipes data from within our application, we can resort to the backups. We store these backups on an entirely independent service, of course.
We have never had to use these backups in the 2 years of our product's existence, but we frequently ensure the backups actually work, by restoring them onto a separate server and ensuring the data is all there.
We are able to do selective restores by the way. For instance, if a programming bug caused errors only on one database table, e.g. corrupting only the performance reviews, and if we only noticed the corruption 3 days later, we'd still be able to restore the performance review table 3 days later, while leaving all changes in the 360 degree reviews unchanged. This way, only performance reviews created or changed in the past 3 days would get reset, limiting the damage a lot.
Note that we are not able to restore just one single company's data. So if you delete some of your data by accident, it is actually getting deleted. Just because we could restore the entire database, it doesn't mean we can restore that one user you deleted.
Should we be sharing this information at all?
While it may sound like we're explaining all our secrets here, we're not. Any hacker worth their salt would find out about the things we mention here pretty soon. Some of our clients believe we shouldn't mention what web server or application framework we're using. However, there are tools that help an attacker figure out these things within minutes, so there's no point trying to hide that we're on Wicket and Jetty.
Security is the most important aspect when choosing a cloud platform. But there are other related topics that deserve a mention. An application needs to be more than secure, it needs to be available and functional as well, you need to get support for issues that arise, and so on.
We picked App Engine for the very reason that it's optimised for availability. An application must meet quite a few criteria to run on App Engine, mainly because downtime is unacceptable; certain coding practices are simply not possible there. In return, your application is always available under normal conditions.
Our entire business model is geared to providing the best user experience possible. Availability is a key ingredient. We're typically achieving 99.95% according to our Pingdom tracker (see the details here).
We will do whatever is possible to keep it this way, and fix any downtimes with the highest priority, in the middle of the night if necessary. Our own releases do not require downtime. We roll out small upgrades continuously a few times each week, and conduct larger upgrades on weekends only. Downtime cannot be ruled out entirely, but if for some reason we're offline for more than a few minutes, we'll write a post-mortem on our blog and reimburse customers who were affected, for instance by providing a free month of service. An example of a post-mortem can be found here: 2 hours of downtime on October 23rd 2014
App Engine scales transparently. If more users use the system, more server instances are automatically started, and join forces to handle the load. Typically, 2 server instances are sufficient to balance the load between them, but on certain deadlines the load goes up, and we see maybe 3 or 4 instances in operation. Startup of a new instance takes less than 15 seconds. Other applications running on App Engine have however been reported to use 800 instances under great load, so future growth will not be an issue. We won't get tied down with performance tuning or deadlock-fixing just because we're successful and more customers sign up.
We typically respond within 24 hours to "normal" questions that don't seem urgent to us. We try to reply within 2 hours to urgent questions, e.g. if an administrator is stuck and doesn't know how to proceed with performance reviews that are due within a day or two.
Our company is based in Berlin, Germany, but our support team is distributed across Berlin, Chicago and San Francisco, so we cover Europe and the US during all business hours. APAC support is limited to late evenings and early mornings local time.
Check our contact page for our phone and email addresses.
Overall Product Quality
Even if a system is up and running, program errors ("bugs") may occur that prevent certain features from working. We take this just as seriously, and do whatever we can to ensure the highest quality standards.
We place a lot of emphasis on automated testing. Each important piece of our software is double-checked by a complementary piece of software that ensures the original code works well under expected and unexpected conditions. We use JUnit to create a large battery of automated tests, which we run before upgrading our systems. By creating automated tests, we future-proof our code against future failures and errors.
We also believe in code reviewing each other's code. Not every line needs to be verified, but we find it very valuable that we all follow similar coding practices, to avoid surprises.
Once all automated tests pass and all code reviews have finished, we deploy our new software to the QA system, which we use for our internal SI instance as well. Then we roll out a new release to the staging version of the production system. There we can test new features using live data, while everyone else is still using the latest production release. Only when we're happy, we promote a release to production, and monitor it for a while. If anything goes wrong, we can roll back to the previous release within a minute.
In addition, larger features are typically subjected to a lot of user-testing, and we keep improving features even after they have been shipped. Learn more about our user testing approach on our blog. We monitor the logfiles carefully, and every exception is automatically sent by email to 2 lead developers, keeping them on their toes as well. We don't claim our software has no bugs, but we're extra sensitive to any issues that may occur, and bugfixing always gets higher priority than feature development.
App Engine comes with a sophisticated administration console, which includes monitoring and a logfile viewing system. This allows us to pinpoint issues within minutes or even seconds, even when there are dozens of parallel requests. We regularly scan the logfiles for unexpected errors too, and get in touch with end users if anything went wrong, notifying them about the problem and about our plans for fixing it. The system automatically sends email to the development team if a bug occurred, just to make sure no problem goes unnoticed.
We also use Pingdom to monitor latency and downtime from various locations across the globe. If you'd like to see our Pingdom statistics, we can give you access.
We keep an ever-growing internal audit log which helps SI staff get a good overview of what is happening in a client's system. The audit log doesn't contain performance/feedback information, but we track all events like logins, logouts, edits to objectives, assignment of permissions, mails sent etc., and it includes data like IP address, browser version and so on. The audit log is currently very technical and low-level, so it's not accessible to SI customers yet. We have plans to make the audit logs available to SI customers via the application too; until then, we'll do an Excel export on a case-by-case basis if you need to know.
Data portability, deleting your data
You are welcome to create your own backups. You can download an XML file that contains all your company data, or you can download data per review cycle. You could for instance download just the XML file for all performance reviews done in the Review Cycle 2014. The XML file can be used to populate another system if you decide to leave our service. You will find the XML-download buttons at the end of the review cycle overview pages. The button for downloading all data is in the "advanced" tab of the global settings page. To ensure this is not misused by a disgruntled administrator, the download button is off by default; please contact SI support to enable it.
We also provide a few means of exporting data to CSV format, so you can further process it. This is available currently for performance review core data, for objectives, and for your user database. We're happy to learn about your additional requirements, e.g. if you plan to conduct additional reporting on the data, and to help you with that by adjusting the exports, or creating new exporters.
You may always decide to permanently delete your data. There's a button in the Global Settings dialog that lets you wipe all content. If you've been using our service for more than 4 weeks, however, this feature is protected by an additional master password. You will only get this password if one of your administrators asks for it by email. We will check whether we've been in touch with this person before, and if more than one administrator exists in the system, we may ask the other person for confirmation. We put this extra step in to prevent snap decisions by disgruntled or intoxicated administrators. After all, the data would really be gone, and our backups do not allow for selective recovery on a per-company basis.
Raising and tracking bugs
Every user is encouraged to report issues from right inside the application as well: there is a large black button at the bottom of each screen that solicits feedback. But if you want to report a major issue, attach screenshots or other documentation, sending an email to email@example.com is probably the easiest solution.
Do you need further information?
We are happy to answer more specific questions if you have any, and we're happy to extend this document too. Please get in touch.