Sandstorm Blog

Sandstorm is returning to its community roots

By Kenton Varda - 06 Feb 2017

Most people know Sandstorm as an open source, community-driven project aiming to enable self-hosting of cloud services and to make it possible for open source web apps to compete with today’s cloud services.

Many people also know that Sandstorm is a for-profit startup, with a business model centered on charging for enterprise-oriented features, such as LDAP and SAML single-sign-on integration, organizational access control policies, and the like. This product was called “Sandstorm for Work”; it was still open source, but official builds hid the features behind a paywall. Additionally, we planned eventually to release a scalable version of Sandstorm for big enterprise users, based on the same tech that powers Sandstorm Oasis, our managed hosting service.

As an open source project, Sandstorm has been successful: We have a thriving community of contributors, many developers building and packaging apps, and thousands of self-hosted servers running in the wild. This will continue.

However, our business has not succeeded. To date, almost no one has purchased Sandstorm for Work, despite hundreds of trials and lots of interest expressed. Only a tiny fraction of Sandstorm Oasis users choose to pay for the service – enough to cover costs, but not much more.

We attribute this failure to two main problems:

We plan to publish a more complete postmortem in a subsequent blog post.

Unfortunately, Sandstorm the business has now run out of money, and we have been unable to raise more.

The Project Lives On

Although it will no longer be our full-time job, Sandstorm will continue as an open source project. We still strongly believe in Sandstorm’s long-term vision and cannot abandon it. I personally will continue to lead Sandstorm’s technical development: reviewing and merging pull requests, pushing releases, and developing new features. We will continue to operate Sandstorm Oasis – your data there is safe. Meanwhile, we will make it easier for our extended community to be involved in core development and decision-making. Jade will be in contact with individual community members to appoint community leaders and grant them the authority to handle a variety of community organizing functions, from App Market approval to organizing meetups.

Ironically, the pace of development may not even be hurt much. Over the past year, the Sandstorm team has spent a great deal of our time on enabling our business, e.g. building a payment mechanism, processing customers, marketing, and the like. I personally have spent far too much time on fundraising, sales, and other deal-making rather than on coding. With this shift in direction, we can now focus strictly on building out the core platform, getting more done with less time.

Immediate Action Items

As a result of this change, the following has happened today:

How you can help

Want to see Sandstorm succeed? Then, contribute!

Sandstorm Solutions: pay for feature prioritization

By Drew Fisher - 01 Dec 2016

Is there a feature or bug holding you back from deploying Sandstorm at your company or for your team? Do you need the Sandstorm core devs to prioritize a feature? Sandstorm Solutions can fix that.

There’s always way more on our roadmap than time and people to do it. We receive requests from potential customers around the world who would love to use Sandstorm, but need a particular planned but unimplemented feature, or hit a bug that affects them in particular. We now offer Sandstorm feature prioritization to our customers.

We’re happy to adjust our priorities to support these folks, so we figure we should state that clearly and make sure it’s something everyone can take advantage of, not just the people who inquire.

So here’s the deal with Sandstorm Solutions: if you want a Sandstorm feature prioritized, or a particular bug fixed, we can do it. The way this works is:

Get an estimate »

Need something different? We’re happy to talk about how we can help you succeed - drop us a line at solutions@sandstorm.io.

Sandstorm Oasis emerging from beta

By Kenton Varda - 17 Nov 2016

Sandstorm is about making it easy to run a personal server. But we also offer Sandstorm Oasis, a service which runs your Sandstorm server for you.

Contradiction?

Actually, no: Even if you run your own server, Oasis benefits you. Oasis is important because it makes it possible for anyone to use Sandstorm’s library of open source apps, even if they really don’t want to run their own server. A larger audience means that more and better apps will become available. Indeed, after we launched Oasis last year, the rate of new apps becoming available on Sandstorm spiked.

That benefits self-hosters, because those same apps can be used on your private server, too.

In fact, we at Sandstorm don’t necessarily think “the cloud” is a bad idea. What we believe is that you should have the freedom to choose what makes sense for you. But that choice is moot if the particular app you need to use is only available in the cloud – we need the same apps to be available everywhere.

Oasis has now been running reliably for over a year. The Sandstorm team uses Oasis every day to get our own work done. I am composing this blog post in Etherpad, while organizing my task list in Wekan, chatting with teammates in Rocket.Chat, and syncing files with Davros.

Here are just some of the things we’ve changed since Oasis was launched:

We’ve so far kept Oasis labeled “beta”, mostly because, as engineers, we always feel like there’s so much more to do. But, that will always be true – no good software project is ever “done”. With Oasis being used for so much real work, the time has come to remove the “beta” label.

Oasis will officially emerge from beta on November 27. We wanted to give advance notice of this change because it affects our paying users: we will no longer be waiving your subscription fee as we have during the beta period. For backers of our Indiegogo campaign who opted for free hosting as a perk, the timer on your service will start now (hey, you got an extra free year!). For the rest, your next monthly invoice will be charged to your credit card. Your subscription payments help support further development of Sandstorm and packaging more apps. Thank you for your support!

Demo Oasis now »
(no sign-in required)

Sandstorm now supports RHEL 7, CentOS 7, Arch, and more

By Kenton Varda - 10 Nov 2016

As of a couple weeks ago – October 23, 2016 – Sandstorm can now be installed on systems with:

- Linux kernels as old as version 3.10, and
- kernels compiled without user namespace support (CONFIG_USER_NS=n).

This means that Sandstorm can now be installed on Red Hat Enterprise Linux (RHEL) 7, as well as its cousin CentOS 7, both of which use kernel version 3.10.

It also means that Sandstorm can now be installed on Arch Linux, which has historically shipped kernels compiled with CONFIG_USER_NS=n.

So if you previously couldn’t install Sandstorm because you were using one of these distros, now you can!

Install Sandstorm now »

What changed?

For the technically curious…

Sandstorm uses the Linux kernel’s “namespaces” feature as part of setting up the secure sandboxes in which apps run. Normally, creating namespaces requires root privileges, because these features could be used to escalate privileges. However, using “user namespaces”, a process that does not have root privileges can create a special kind of namespace in which other namespaces are (ostensibly) safe to use. Hence, user namespaces allow unprivileged processes to create sandboxes.
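For the technically curious, here is a minimal sketch of the idea (not Sandstorm’s actual sandboxing code, which does considerably more): an unprivileged process first creates a user namespace, and from inside it can then create the mount and PID namespaces a sandbox needs.

```c
#define _GNU_SOURCE
#include <sched.h>
#include <stdio.h>
#include <unistd.h>

int main(void) {
    /* Entering a new user namespace makes us "root" within that namespace
     * (but not outside of it), without needing real root privileges. */
    if (unshare(CLONE_NEWUSER) != 0) {
        perror("unshare(CLONE_NEWUSER)");
        return 1;
    }

    /* Within the user namespace we may now create the other namespace
     * types a sandbox needs, such as a private mount and PID namespace. */
    if (unshare(CLONE_NEWNS | CLONE_NEWPID) != 0) {
        perror("unshare(CLONE_NEWNS | CLONE_NEWPID)");
        return 1;
    }

    printf("created sandbox namespaces without root privileges\n");
    return 0;
}
```

On a kernel with user namespaces enabled, this runs as an ordinary user; on a kernel with CONFIG_USER_NS=n, the first unshare() call fails – which is exactly the problem described below.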

For security reasons, most of Sandstorm does not run with root privileges. Because of this, it has long relied on user namespaces to allow it to set up sandboxes. At the time the Sandstorm project started, it looked like user namespaces would soon be broadly available across Linux distros, so this seemed like a reasonable strategy.

Unfortunately, this has not been entirely true in practice. The enterprise-oriented RHEL and CentOS distros have long release cycles. Today, they still use kernel version 3.10, which is more than three years old. Because user namespaces still had many problems in this kernel version, they were disabled by default and are today available only with a boot flag. Meanwhile, some faster-moving distros like Arch have chosen to keep user namespaces disabled even with newer kernel versions due to security concerns: the user namespaces feature has been the source of many local privilege escalation exploits in Linux. Although these vulnerabilities can’t be exploited by Sandstorm apps, such frequent vulnerabilities are problematic for servers which rely on user account separation for security outside of Sandstorm.

Even as it became apparent that Sandstorm’s use of user namespaces was preventing it from being used on some distros, we were hesitant to try other approaches. It seemed like the only way to solve the problem would be to employ a setuid-root binary to set up sandboxes when user namespaces were not available. A setuid-root binary is inherently risky – if not written exactly correctly, it could open its own privilege escalation vulnerability. Also, it would require a major refactoring of Sandstorm internals to move the supervisor into its own binary.

But a couple weeks ago, I realized suddenly that a different idea would work. The Sandstorm server normally starts up as root, but then runs several child processes under a regular user account. Most of Sandstorm’s business logic is in a node.js web server. That process talks via Cap’n Proto RPC to a “back-end” daemon written in C++, which in turn launches app sandboxes. This back-end daemon is hand-coded in C++, with the core logic all living in a single file.

Because of this design, it turned out to be relatively easy to pass superuser privileges down through the back-end, while still keeping them away from the web server. Specifically, the back-end can execute with its effective UID set to a normal user account while its real UID remains root. Then, when it comes time to start a sandbox, it can promote itself back to root to do the work.
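The following is a minimal sketch of that trick, assuming a hypothetical unprivileged service account (this is not Sandstorm’s actual back-end code): the process keeps root as its real UID, runs day-to-day with an unprivileged effective UID, and switches back to root only around the privileged sandbox setup.

```c
#include <stdio.h>
#include <unistd.h>

/* Hypothetical UID of the unprivileged account the server normally uses. */
#define SERVER_UID 1000

int main(void) {
    /* The process is started as root. Drop only the *effective* UID, so
     * ordinary work runs unprivileged while the real UID stays 0. */
    if (seteuid(SERVER_UID) != 0) { perror("seteuid(SERVER_UID)"); return 1; }

    /* ... normal, unprivileged back-end work happens here ... */

    /* Time to launch a sandbox: promote back to root just for the
     * privileged setup, then immediately drop privileges again. */
    if (seteuid(0) != 0) { perror("seteuid(0)"); return 1; }
    /* ... privileged sandbox setup (mounts, chroot, etc.) would go here ... */
    if (seteuid(SERVER_UID) != 0) { perror("seteuid(SERVER_UID)"); return 1; }

    return 0;
}
```

The regain-root call succeeds because the real (and saved) UID is still 0; a process that had fully dropped all three UIDs could not do this.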

This turned out to take only a couple hours to implement. In retrospect, the design seems obvious, and I wish I’d thought of it sooner!

There is a minor downside: If a vulnerability allows an attacker to cause the back-end to execute arbitrary code, that code could claim the superuser privileges, whereas before it would be limited to the Sandstorm server UID. This risk is probably small because the back-end is a relatively simple program that only speaks directly to other trusted programs (although it speaks indirectly to potentially-malicious actors). Nevertheless, if user namespaces are available, then Sandstorm will avoid handing root privileges to the back-end at all, continuing to operate as it did historically.

What do I need to do?

Existing Sandstorm users need not take any action. Your servers will continue to operate exactly as they always have.

But if you’ve been held back from installing Sandstorm before because it wouldn’t work on your distro, you should try again now!

Install Sandstorm Standard »

Try Sandstorm for Work (supports corporate SSO via LDAP/SAML/AD and organization management features) »

Linux kernel CVE-2016-5195 "Dirty COW" mitigated by Sandstorm

By Kenton Varda - 25 Oct 2016

Last week, a Linux kernel bug, CVE-2016-5195, was described as “the most serious Linux local privilege escalation ever”. The bug – which potentially allowed any code running on a Linux machine to escalate its privileges to root – was already being actively exploited in the wild before it was fixed, and had existed in the kernel for many years.

Since Sandstorm allows any user of a server to upload their own apps, you might wonder if this bug could allow a Sandstorm user to compromise the server.

We’re happy to report that the answer appears to be “no”. As is often the case with Linux kernel bugs, our sandbox blocked the exploit.

Of course, we still recommend updating your kernel in case the bug can be exploited in ways that have not been discovered yet.

Technical Details

The bug in question was a race condition in the handling of memory pages mapped copy-on-write. A process can ask that a read-only file be mapped into its memory space in such a way that it is allowed to modify the mapped memory. When the process writes to the memory, the kernel makes a private copy of the affected page, so that the process only modifies its copy, not the original. A process can also later request that the modifications it made be discarded, returning the page to its original state. In certain circumstances, by both writing to a page and requesting this discard at the same time (in separate threads), the process could end up writing to the original pages that are shared with other processes on the system, instead of its own private copy. Hence, the process could modify any file on the system. By modifying, say, the sudo utility, it could give itself a backdoor allowing it to gain root privileges trivially.
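To make the mechanics concrete, here is a small, benign sketch (not an exploit) of the two operations involved, performed safely one after the other: writing to a private copy-on-write mapping of a read-only file, then discarding the private copy with madvise(MADV_DONTNEED). In this sketch the mapping is writable, so the special “force” path described below is not involved; the vulnerability arose only when a forced write and the discard raced in separate threads.

```c
#include <fcntl.h>
#include <stdio.h>
#include <sys/mman.h>
#include <unistd.h>

int main(void) {
    /* Map a read-only file copy-on-write. MAP_PRIVATE means our writes
     * only ever touch a private copy of each page, never the file. */
    int fd = open("/bin/sh", O_RDONLY);
    if (fd < 0) { perror("open"); return 1; }

    char *p = mmap(NULL, 4096, PROT_READ | PROT_WRITE, MAP_PRIVATE, fd, 0);
    if (p == MAP_FAILED) { perror("mmap"); return 1; }

    printf("original byte:  %02x\n", (unsigned char)p[0]);
    p[0] = 'X';                              /* kernel silently copies the page */
    printf("after write:    %02x\n", (unsigned char)p[0]);  /* our private copy */

    /* Discard the private copy; the next access faults the file's
     * original page back in. */
    madvise(p, 4096, MADV_DONTNEED);
    printf("after discard:  %02x\n", (unsigned char)p[0]);

    close(fd);
    return 0;
}
```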

However, not just any old write worked here. In order to trigger the race condition, the process had to write in a way that calls the kernel’s get_user_pages() function with the force parameter set to 1. The force parameter says: “If this page is mapped copy-on-write, then let me write to it (making a private copy) even if the page’s protection mode is read-only.” As it turns out, it is possible for a memory mapping to be both read-only and copy-on-write, and in fact this is the mode that is usually used when mapping in a program’s main binary and shared libraries. Normally, no copy is ever performed, because the writes that would trigger them are not allowed. However, there is a special case where this combination of flags matters: If you are running a program in a debugger, and you ask the debugger to insert a breakpoint, it does so by overwriting the instruction at the given address with a break instruction. That is, it modifies the mapped executable. The force flag actually exists for exactly this purpose: so that the debugger can inject breakpoints into the program being executed by the process being debugged (without affecting any other processes that happen to be running the same program).

Because the force flag is only useful in very specific circumstances, only certain code paths can trigger the vulnerability. Kernel security engineer and Sandstorm contributor Andrew Lutomirski tells us the only entry points appear to be:

- ptrace() (specifically, the PTRACE_POKETEXT and PTRACE_POKEDATA operations that debuggers use to write to a traced process’s memory), and
- write()s to /proc/<pid>/mem, an alternative interface for modifying another process’s memory.

As it turns out, none of these code paths can be exploited by Sandstorm apps:

- Sandstorm’s seccomp filter blocks the ptrace() system call entirely, and
- Sandstorm does not mount /proc inside app sandboxes, so /proc/<pid>/mem is not reachable.

So, as far as we can tell, Sandstorm has never been vulnerable to this bug.

Defense in depth

Even if Sandstorm were vulnerable, the exploit would have had a far smaller impact inside Sandstorm than in a typical Linux environment, because:

When running on Sandstorm, a user’s data in an app like Etherpad is containerized separately from other users’ data. In fact, we go one step further and containerize each document separately. Had Sandstorm not mitigated the bug outright, it appears the impact would have been that an app could break Sandstorm’s per-document isolation and read or write documents belonging to any number of users, so long as those users all used the same version of the same app on the same server. The app still would not have been able to interfere with other apps. This is the status quo outside of Sandstorm: most apps keep all users’ data in a single database without per-user isolation. Overall, this is much less significant than a privilege escalation to root. Thankfully, our seccomp mitigation prevented this.

Sandstorm’s Security Record

This is not the first Linux security bug mitigated by Sandstorm. In fact, we’ve kept a long list. Moreover, in addition to mitigating Linux kernel problems, Sandstorm mitigates most vulnerabilities in the apps that run on top of it. Check out the whole list of mitigated vulnerabilities that we’ve compiled: Sandstorm Security Non-Events

Want to try out Sandstorm as a user? Try the online demo »