Author

Topic: Cloudsmash.io - Decentralized VPS Cloud Open To The Public (Read 3447 times)

member
Activity: 98
Merit: 10
Is this still alive?
legendary
Activity: 1302
Merit: 1007
Glad to hear that this project is still going! I thought for a bit that it was vaporware since I had not heard of any updates about it since your previous post.  Do you have any sort of general timeline for the project so as to know when we can expect it to open to the public or have an open-beta, sort of?

The summer was a pretty hectic one. In addition to developing Cloudsmash, I also run a small datacenter called JeffColo.

In 2017, we decided to reorg and expand our datacenter into a larger space. We have had significant delays getting the new facility online. Before the move I suspected there would be problems, so in July I moved all the Oregon Cloudsmash gear down to Hurricane Electric in Fremont, CA.

The migration to Fremont was an excellent test of the resiliency of the mesh. Multiple peering points were dropped, VMs were migrated to alternate sites, and hardware was packed up and driven to its new home. Once back online, we established our new peering and began to migrate VMs back. It all worked perfectly, with zero customer downtime.

The move also allowed us to test using a virtualized equipment stack. This means that we have no physical routers, firewalls, etc. Our servers connect to simple switches and rely on virtualized routers to establish upstream peering with HE.net. We received a 10 Gbps network drop, so we now get to validate networking performance beyond the gigabit threshold. Recently we have been able to get our virtual router instances to forward at the full 10 Gbps rate despite running on Westmere hardware from 2010.

Quote from: btcton
Do you have any sort of general timeline for the project so as to know when we can expect it to open to the public or have an open-beta, sort of?

This all sounds awesome and I am more than glad that this is alive and well. As always, I will be looking forward to more information about the service and how it develops and scales. I still don't see any official channels or any relevant results on Google or any other search engine, but if there are any, don't hesitate to let me know. I would be very interested in following the development of any technologies related to distributed scaling of systems, and yours seems to be a major step in that direction.
The previous beta round quota had been filled and I've been accepting feedback for the last several months. There have been small improvements here and there, mainly surrounding network performance and reliability.

Recently we began the process of hiring an architecture engineer. The selection process has been going well, and it looks like we may have our first Cloudsmash developer coming on board soon. I expect to have another consumer beta round in late spring 2018 and hopefully will do a very limited alpha round for providers in mid-summer 2018.

Quote from: btcton
I have honestly never thought of or seen anything like this before (making decentralized VPSs) and would love to see how such a technology could fare in a near-production environment. I do not have any specific host services in mind, but I do wish to see what can be accomplished with it.

I had not either and was frustrated that it didn't exist. It's certainly not vaporware; at this point it's a well-tested, stable platform that desperately needs a good web interface for consumers and providers to interact with. Cloudsmash now has 14 PoPs and several hypervisor locations, and the network has experienced zero downtime since I brought it up in August 2016. It's kind of a living, breathing thing.
member
Activity: 107
Merit: 30
Glad to hear that this project is still going! I thought for a bit that it was vaporware since I had not heard of any updates about it since your previous post.  Do you have any sort of general timeline for the project so as to know when we can expect it to open to the public or have an open-beta, sort of?

The summer was a pretty hectic one. In addition to developing Cloudsmash, I also run a small datacenter called JeffColo.

In 2017, we decided to reorg and expand our datacenter into a larger space. We have had significant delays getting the new facility online. Before the move I suspected there would be problems, so in July I moved all the Oregon Cloudsmash gear down to Hurricane Electric in Fremont, CA.

The migration to Fremont was an excellent test of the resiliency of the mesh. Multiple peering points were dropped, VMs were migrated to alternate sites, and hardware was packed up and driven to its new home. Once back online, we established our new peering and began to migrate VMs back. It all worked perfectly, with zero customer downtime.

The move also allowed us to test using a virtualized equipment stack. This means that we have no physical routers, firewalls, etc. Our servers connect to simple switches and rely on virtualized routers to establish upstream peering with HE.net. We received a 10 Gbps network drop, so we now get to validate networking performance beyond the gigabit threshold. Recently we have been able to get our virtual router instances to forward at the full 10 Gbps rate despite running on Westmere hardware from 2010.

Quote from: btcton
Do you have any sort of general timeline for the project so as to know when we can expect it to open to the public or have an open-beta, sort of?

The previous beta round quota had been filled and I've been accepting feedback for the last several months. There have been small improvements here and there, mainly surrounding network performance and reliability.

Recently we began the process of hiring an architecture engineer. The selection process has been going well, and it looks like we may have our first Cloudsmash developer coming on board soon. I expect to have another consumer beta round in late spring 2018 and hopefully will do a very limited alpha round for providers in mid-summer 2018.

Quote from: btcton
I have honestly never thought of or seen anything like this before (making decentralized VPSs) and would love to see how such a technology could fare in a near-production environment. I do not have any specific host services in mind, but I do wish to see what can be accomplished with it.

I had not either and was frustrated that it didn't exist. It's certainly not vaporware; at this point it's a well-tested, stable platform that desperately needs a good web interface for consumers and providers to interact with. Cloudsmash now has 14 PoPs and several hypervisor locations, and the network has experienced zero downtime since I brought it up in August 2016. It's kind of a living, breathing thing.
legendary
Activity: 1302
Merit: 1007
Glad to hear that this project is still going! I thought for a bit that it was vaporware since I had not heard of any updates about it since your previous post.  Do you have any sort of general timeline for the project so as to know when we can expect it to open to the public or have an open-beta, sort of? I have honestly never thought of or seen anything like this before (making decentralized VPSs) and would love to see how such a technology could fare in a near-production environment. I do not have any specific host services in mind, but I do wish to see what can be accomplished with it.
member
Activity: 107
Merit: 30
I want 1 VPS at Manchester, UK - Hurricane Electric, M247

I will pay you via BTC.

Thanks


We only have a point of presence in Manchester to relay our traffic into our network; hypervisors could be located anywhere.

However, that does bring up an interesting point. I'll see if I can start maintaining a list of hypervisor locations.

Currently they are:

  • Fremont, California USA
  • Portland, Oregon USA
  • Berkeley Springs, West Virginia USA
  • Ashburn, Virginia USA
  • Amsterdam, NL
member
Activity: 107
Merit: 30
Is there any ETA (I assume a very rough one considering you guys are still in the beta testing phase) for when you plan to open the project for public consumption? What other cloud providers do you expect to be competing against? Is it the DigitalOcean / AWS Lightsail sort that simply provide cloud services or bigger infrastructure-as-a-service giants such as Google Cloud Platform and Microsoft Azure? I am talking functionality-wise rather than scale-wise, at least to begin with.

On a side note, your website seems to be down still. Is there any official way to follow the progress of this project? Are there any plans on creating a fully fledged website with the information or is it still too early for that?

I'm hoping that we will open to the general public for VPS use sometime in the fall of 2017. It might not be until early in 2018 that we open to the general public for server contributors.

There are some "IaaS" aspects to it in terms of the seamless global networking, automated storage replication and fault tolerance. However it is mainly targeted at unmanaged services. I would expect that people would build managed services on top of it.



That sounds great. The website still seems to be down, however.  Are there any official channels to obtain constant updates on this project? If not, what way could I be kept up to date on what is happening and the potential release dates as well as open services? I definitely do not consider myself knowledgeable enough to know how much work this takes or even the back-end or structure of this shared cloud, which is the main reason why I would like to know potential ETAs and milestones for the project as soon as possible.

Yeah, no website yet. What can I say? I'm a better packet wrangler than graphic designer. If anyone wants to help with a website, I am open to suggestions. Currently this project is operated by me alone. I am actively looking to hire someone to help, though. Anyone interested?

Over the summer I did work on a web app for user interaction and it is mostly working. I just need to tie it into the backend, and then it should allow people to manage their virtual machines from the comfort of their web browsers. For now I handle all the administration (power on, power off, OS install); however, once the machine is running I'm basically hands off and the end user has full control of their environment.

member
Activity: 107
Merit: 30
I want 1 VPS at Manchester, UK - Hurricane Electric, M247

I will pay you via BTC.

Thanks


If I understand correctly, this project is still in the conceptual design/early implementation phase. This means that they are nowhere close to actually delivering a finalized product for the end user/consumer (us). In the last update they said that they have plans for opening the service to the general public in the fall of 2017, but they did warn that it is possible that it will not actually happen until later next year. Given how I still cannot find a website for them, I believe it is too early to be asking for offers. If there are any updates I am not aware of, please let me know.

Actually, the core infrastructure all works and has been operating without downtime since August 2016. It's well beyond the conceptual stage; the real question is scale: how many people and machines can I load onto my network topology before things start getting problematic?

The reason for the beta rounds is to slowly introduce additional people onto the platform so we can monitor the network and observe any problems the beta applicants might run into.

There will be several beta rounds for virtual machine consumers and separate beta rounds for virtual machine providers. We are currently in our second round of beta applicants for consumers. If you are interested in getting a virtual machine please let me know by email at [email protected]
member
Activity: 107
Merit: 30
Good news: it has taken forever, but the Hong Kong peering point is online. The only downside is that currently we are only receiving traffic through the PoP and are unable to originate traffic from there. I'm sure it will get worked out eventually. The HK peering point is connected directly to HKIX (the Hong Kong Internet Exchange).

Additionally, I expect that the Chicago and Buffalo PoPs will be coming online sometime this week.

Singapore is still a work in progress.

Good news: the Chicago and Buffalo PoPs are online and working. Thanks to Nexeon for providing the hosting for those.

In other news, the Hong Kong and Singapore providers are just awful. I've decided to shut down those PoPs for now; they have created far more problems than they have solved.
legendary
Activity: 1302
Merit: 1007
I want 1 VPS at Manchester, UK - Hurricane Electric, M247

I will pay you via BTC.

Thanks


If I understand correctly, this project is still in the conceptual design/early implementation phase. This means that they are nowhere close to actually delivering a finalized product for the end user/consumer (us). In the last update they said that they have plans for opening the service to the general public in the fall of 2017, but they did warn that it is possible that it will not actually happen until later next year. Given how I still cannot find a website for them, I believe it is too early to be asking for offers. If there are any updates I am not aware of, please let me know.
full member
Activity: 140
Merit: 100
I want 1 VPS at Manchester, UK - Hurricane Electric, M247

I will pay you via BTC.

Thanks
legendary
Activity: 1302
Merit: 1007
Is there any ETA (I assume a very rough one considering you guys are still in the beta testing phase) for when you plan to open the project for public consumption? What other cloud providers do you expect to be competing against? Is it the DigitalOcean / AWS Lightsail sort that simply provide cloud services or bigger infrastructure-as-a-service giants such as Google Cloud Platform and Microsoft Azure? I am talking functionality-wise rather than scale-wise, at least to begin with.

On a side note, your website seems to be down still. Is there any official way to follow the progress of this project? Are there any plans on creating a fully fledged website with the information or is it still too early for that?

I'm hoping that we will open to the general public for VPS use sometime in the fall of 2017. It might not be until early in 2018 that we open to the general public for server contributors.

There are some "IaaS" aspects to it in terms of the seamless global networking, automated storage replication and fault tolerance. However it is mainly targeted at unmanaged services. I would expect that people would build managed services on top of it.



That sounds great. The website still seems to be down, however.  Are there any official channels to obtain constant updates on this project? If not, what way could I be kept up to date on what is happening and the potential release dates as well as open services? I definitely do not consider myself knowledgeable enough to know how much work this takes or even the back-end or structure of this shared cloud, which is the main reason why I would like to know potential ETAs and milestones for the project as soon as possible.
member
Activity: 107
Merit: 30
Is there any ETA (I assume a very rough one considering you guys are still in the beta testing phase) for when you plan to open the project for public consumption? What other cloud providers do you expect to be competing against? Is it the DigitalOcean / AWS Lightsail sort that simply provide cloud services or bigger infrastructure-as-a-service giants such as Google Cloud Platform and Microsoft Azure? I am talking functionality-wise rather than scale-wise, at least to begin with.

On a side note, your website seems to be down still. Is there any official way to follow the progress of this project? Are there any plans on creating a fully fledged website with the information or is it still too early for that?

I'm hoping that we will open to the general public for VPS use sometime in the fall of 2017. It might not be until early in 2018 that we open to the general public for server contributors.

There are some "IaaS" aspects to it in terms of the private mesh networking, automated storage replication and fault tolerance. However it is mainly targeted at unmanaged services. I would expect that people would build managed services on top of it.

Yes, there is a web site coming. Until then I'll be updating the status of the project here.
legendary
Activity: 1302
Merit: 1007
This looks quite interesting, but also technically challenging. While I definitely do not have the knowledge to help with the project as a developer or contributor, I most definitely will be following this as a consumer. Is there any ETA (I assume a very rough one considering you guys are still in the beta testing phase) for when you plan to open the project for public consumption? What other cloud providers do you expect to be competing against? Is it the DigitalOcean / AWS Lightsail sort that simply provide cloud services or bigger infrastructure-as-a-service giants such as Google Cloud Platform and Microsoft Azure? I am talking functionality-wise rather than scale-wise, at least to begin with.

On a side note, your website seems to be down still. Is there any official way to follow the progress of this project? Are there any plans on creating a fully fledged website with the information or is it still too early for that?
member
Activity: 107
Merit: 30
The Portland, OR PoP is now online. This is a huge benefit for people using virtualization resources located in the Portland metro area, as gateway access is now under 1 ms. It also seems to have fixed some of our issues with getting traffic into Comcast at a decent speed.

Just wanted to give a huge thanks to Telos and Opus Interactive for taking the time to set up peering for Cloudsmash. They had never set up a peering arrangement quite like mine before, and they were extremely helpful and totally went the extra mile to make it happen.

member
Activity: 107
Merit: 30
Good news: it has taken forever, but the Hong Kong peering point is online. The only downside is that currently we are only receiving traffic through the PoP and are unable to originate traffic from there. I'm sure it will get worked out eventually. The HK peering point is connected directly to HKIX (the Hong Kong Internet Exchange).

Additionally, I expect that the Chicago and Buffalo PoPs will be coming online sometime this week.

Singapore is still a work in progress.


member
Activity: 107
Merit: 30
Looks like I'll be bringing Los Angeles, US and Manchester, UK into full operation by the end of the week. All testing has gone without issue.

Singapore and Hong Kong are taking some time; I question whether the provider I chose has ever set up a peering arrangement before, even though they claim they have.

I may have also found a small regional peering point in the Portland, OR area with direct peering into Tata, NWAX, etc. Updates to follow on this.

Chicago, IL and Buffalo, NY are also progressing at a snail's pace.

member
Activity: 107
Merit: 30
Currently active peering points

United States

  • Seattle, WA - NTT, GTT, Equinix Asia, Cogent, Telia, Level 3
  • Dallas, TX - NTT, GTT, Equinix Asia, Cogent, Telia, Level 3
  • Matawan, NJ - NTT, GTT, Equinix Asia, Cogent, Telia, Level 3
  • Miami, FL - NTT, GTT, Equinix Asia, Cogent, Telia, Level 3
  • Charlotte, NC - Cogent, AT&T
  • Los Angeles, CA - Hurricane Electric
  • Chicago, IL - ColoCrossing, Hurricane Electric, GTT
  • Buffalo, NY - ColoCrossing, Hurricane Electric, GTT

Europe

  • Frankfurt, Germany - DE-CIX, Hurricane Electric
  • Manchester, UK - Hurricane Electric, M247

Peering points still in progress

United States

  • None in progress

Asia Pacific

  • None in progress

I'm sure I'll end up adding something in Amsterdam and Tokyo eventually, but for now I think this makes for a pretty sufficient selection of geo-diverse carrier connectivity. All virtual machines on the platform receive traffic from all of our BGP peering points during normal operation. By default, outgoing traffic is sent through the highest-throughput / lowest-latency PoP. Platform consumers can define static routes and set specific PoPs for any destination address. All sorts of ECMP options become available to virtual machine users with this strategy. It also makes for a very resilient network with lots of multi-homing: if an outbound PoP becomes unavailable, you simply fail over your outbound traffic to any of our other points of presence.
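
To make the routing-policy idea concrete, here's a rough sketch of per-flow egress PoP selection with static overrides and hash-based ECMP spreading. This is purely illustrative Python of my own: the PoP names come from the list above, but the data structures and API are made up, not Cloudsmash's actual interface.

Code:
import hashlib
import ipaddress

# Illustrative only: PoP names taken from the list above, everything else invented.
POPS = ["seattle", "dallas", "matawan", "miami", "charlotte",
        "los-angeles", "chicago", "buffalo", "frankfurt", "manchester"]

STATIC_ROUTES = {  # consumer-defined: destination prefix -> forced egress PoP
    ipaddress.ip_network("198.51.100.0/24"): "frankfurt",
}

def egress_pop(src_ip, src_port, dst_ip, dst_port, healthy=POPS):
    """Choose the egress PoP for a single flow."""
    dst = ipaddress.ip_address(dst_ip)
    for prefix, pop in STATIC_ROUTES.items():
        if dst in prefix and pop in healthy:
            return pop  # a static route wins when its PoP is up
    # Hash the flow 5-tuple so each flow sticks to one PoP while flows as a
    # whole spread across every healthy PoP (ECMP). Failover is implicit:
    # an unavailable PoP is simply dropped from the `healthy` list.
    key = f"{src_ip}:{src_port}->{dst_ip}:{dst_port}".encode()
    digest = int.from_bytes(hashlib.sha256(key).digest()[:8], "big")
    return healthy[digest % len(healthy)]

print(egress_pop("100.64.0.7", 40000, "203.0.113.9", 443))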
 
member
Activity: 107
Merit: 30
I've just negotiated deals for two new peering points;

North Carolina, USA (AS174 - Cogent, AS7018 - AT&T, AS8175, AS23336, AS8100)

Frankfurt, DE (AS33891 - DE-CIX, AS6939 - Hurricane Electric)

I'm in the process of getting another two up and running, one in Chicago, IL and one in Buffalo, NY.


I've got both the new peering points online for some testing. Assuming my tests go OK and packets don't go globe-trotting, I'll bring these nodes online full time sometime within the next few days. So far things are looking good. The Frankfurt peering point is on a 10G connection straight into DE-CIX and should be fasssst. Having access to HE.net's 100 Gbps trans-Atlantic crossing is pretty badass.



Testing is going well. I still need to figure out how to register our prefixes with the RIPE DB for the Frankfurt peering point. It is accepting and transmitting traffic; it's just that the majority of the peers on the backbone are filtering our prefixes.

Once I get the RIPE DB stuff done, the 10 Gbps DE-CIX connection in Frankfurt will be fully online.
member
Activity: 107
Merit: 30
I've just negotiated deals for two new peering points;

North Carolina, USA (AS174 - Cogent, AS7018 - AT&T, AS8175, AS23336, AS8100)

Frankfurt, DE (AS33891 - DE-CIX, AS6939 - Hurricane Electric)

I'm in the process of getting another two up and running, one in Chicago, IL and one in Buffalo, NY.


I've got both the new peering points online for some testing. Assuming my tests go OK and packets don't go globe-trotting, I'll bring these nodes online full time sometime within the next few days. So far things are looking good. The Frankfurt peering point is on a 10G connection straight into DE-CIX and should be fasssst. Having access to HE.net's 100 Gbps trans-Atlantic crossing is pretty badass.

member
Activity: 107
Merit: 30
Looks like I might have a deal to get a Singapore and Hong Kong peering point as well!
member
Activity: 107
Merit: 30
I've just negotiated deals for two new peering points;

North Carolina, USA (AS174 - Cogent, AS7018 - AT&T, AS8175, AS23336, AS8100)

Frankfurt, DE (AS33891 - DE-CIX, AS6939 - Hurricane Electric)

I'm in the process of getting another two up and running, one in Chicago, IL and one in Buffalo, NY.

member
Activity: 107
Merit: 30

I have been using sunbreak's service for over a year. First-rate and professional!!

Thanks for the shout-out.
hero member
Activity: 530
Merit: 500
Count me in on this project!

I am very interested, and this might actually get me to finally get into the cloud.
Just send me whatever you have for me to do and I will do it to be part of it!
Thank you very much.
Awaiting your application process.
member
Activity: 107
Merit: 30
Anyone have any other questions?
member
Activity: 107
Merit: 30
Will you be doing anything with Intel CAT to block cross-VM CPU cache attacks, especially on the handful of machines in this initial round?

https://www.researchgate.net/profile/Yuval_Yarom/publication/291830462_CATalyst_Defeating_Last-Level_Cache_Side_Channel_Attacks_in_Cloud_Computing/links/56a6b0d408aeded22e3544ff.pdf

a system that uses CAT to protect general purpose software and cryptographic algorithms.

Their approach can be directly applied to protect against a malicious enclave. However, this approach also does not allow to protect enclaves from an outside attacker.

- https://arxiv.org/pdf/1702.08719.pdf

- https://news.ycombinator.com/item?id=13995374

I'm aware of cache side-channel attacks and the complications they introduce in a multi-tenant virtualization environment. For now, unless you are using a fairly modern CPU that supports Intel's SGX extensions and are running an operating system and/or hypervisor that utilizes them, you are exposed to this type of attack.

Feel free to correct me, especially if you have more detailed information. My understanding is that SGX extensions and features like CAT are only now being tested in mainline Linux kernel releases. I believe CAT support was added in Linux 4.10. The kernel we compile for ourselves is based on the mainline distribution. Every effort is made to ensure that features like these will be utilized if your hardware supports them.

If you are a consumer shopping for virtualization resources, this is one of the things you will be able to specify as a criterion.

For example, you could search for providers who were offering virtual machines that specifically exposed AVX, SSE3 and AES-NI instructions.

Support for SGX and CAT would be yet more CPU features that could be added to the search criteria.
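
As a sketch of what that search might look like, here's a toy filter over provider listings. The records and field names are invented for illustration; this is not the real Cloudsmash schema or API.

Code:
# Toy provider search: require a set of CPU features, as described above.
# Listings and field names are hypothetical.
providers = [
    {"name": "host-a", "cpu_flags": {"avx", "sse3", "aes-ni"}},
    {"name": "host-b", "cpu_flags": {"avx", "avx2", "aes-ni", "sgx", "cat"}},
    {"name": "host-c", "cpu_flags": {"sse3"}},
]

def search(required):
    """Return providers exposing every required CPU feature."""
    return [p["name"] for p in providers if required <= p["cpu_flags"]]

print(search({"avx", "sse3", "aes-ni"}))  # ['host-a']
print(search({"aes-ni", "sgx", "cat"}))   # SGX/CAT as extra criteria -> ['host-b']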

member
Activity: 107
Merit: 30
2017 Copyright. All Rights Reserved.

The Sponsored Listings displayed above are served automatically by a third party. Neither Parkingcrew nor the domain owner maintain any relationship with the advertisers.

Privacy Policy

The above is the information displayed by your website, http://www.cloudsmash.io. Are you no longer using the domain name, or do you no longer do this business?

I just acquired the domain the other day. The name for the service had not been determined until very recently. I took suggestions from people and the 'cloudsmash' name was deemed the best fit. As stated in a previous message, I am still looking for help with web front-end development. Would you like to volunteer?

I think clearly stating what the compensation structure is would go a long way. I can't personally help out here, but I may help find biters, rather than just tossing out that you need help.

The person renting out their machines would set their own prices and receive the entire rental amount minus a small commission.

The larger the network, the smaller the commission percentage will be. The goal is to get the commission down to 2%; it is currently undetermined what percentage we will start at. I can say with confidence that it will definitely not be more than 10% initially, hopefully less.
legendary
Activity: 1988
Merit: 1007
2017 Copyright. All Rights Reserved.

The Sponsored Listings displayed above are served automatically by a third party. Neither Parkingcrew nor the domain owner maintain any relationship with the advertisers.

Privacy Policy

The above is the information displayed by your website, http://www.cloudsmash.io. Are you no longer using the domain name, or do you no longer do this business?

I just acquired the domain the other day. The name for the service had not been determined until very recently. I took suggestions from people and the 'cloudsmash' name was deemed the best fit. As stated in a previous message, I am still looking for help with web front-end development. Would you like to volunteer?

I think clearly stating what the compensation structure is would go a long way. I can't personally help out here, but I may help find biters, rather than just tossing out that you need help.
newbie
Activity: 1
Merit: 0
Will you be doing anything with Intel CAT to block cross-VM CPU cache attacks, especially on the handful of machines in this initial round?

https://www.researchgate.net/profile/Yuval_Yarom/publication/291830462_CATalyst_Defeating_Last-Level_Cache_Side_Channel_Attacks_in_Cloud_Computing/links/56a6b0d408aeded22e3544ff.pdf

a system that uses CAT to protect general purpose software and cryptographic algorithms.

Their approach can be directly applied to protect against a malicious enclave. However, this approach also does not allow to protect enclaves from an outside attacker.

- https://arxiv.org/pdf/1702.08719.pdf

- https://news.ycombinator.com/item?id=13995374
member
Activity: 107
Merit: 30
2017 Copyright. All Rights Reserved.

The Sponsored Listings displayed above are served automatically by a third party. Neither Parkingcrew nor the domain owner maintain any relationship with the advertisers.

Privacy Policy

The above is the information displayed by your website, http://www.cloudsmash.io. Are you no longer using the domain name, or do you no longer do this business?

I just acquired the domain the other day. The name for the service had not been determined until very recently. I took suggestions from people and the 'cloudsmash' name was deemed the best fit. As stated in a previous message, I am still looking for help with web front-end development. Would you like to volunteer?
sr. member
Activity: 364
Merit: 250
2017 Copyright. All Rights Reserved.

The Sponsored Listings displayed above are served automatically by a third party. Neither Parkingcrew nor the domain owner maintain any relationship with the advertisers.

Privacy Policy

The above is the information displayed by your website, http://www.cloudsmash.io. Are you no longer using the domain name, or do you no longer do this business?
member
Activity: 107
Merit: 30
So far I have received 6 beta applications. I forgot to mention how many I would be accepting: I should be able to take a total of 50 applicants at this time, so 44 openings are left. It all depends on what resources you actually want, but 44 is a pretty good estimate.

member
Activity: 107
Merit: 30
So far these have been pretty good questions, keep them coming!
member
Activity: 107
Merit: 30
I'm confused on the selling your own resources part. If you do this, do you pick your own prices, or is the system set up with predetermined pricing that you have to go with?

And there's a downside to this type of system: how can you ensure that users hosting someone's VPS won't access the files/MITM it?

In this first beta round we are supplying all of the hardware and the prices are set at very reasonable initial levels. During the second round, when people are invited to contribute their own servers, everyone sets their own prices. It all comes down to whether you can offer a similar resource at a price competitive enough for someone to be interested in renting it.

In bitcoin terms, this is almost exactly like sites such as miningrigrentals.com, where mining rigs are listed by price and you select based on reputation, rental history and rig performance. All of those factors dictate what a fair asking price is.

Here's a list to see what I mean:

https://www.miningrigrentals.com/rigs/sha256

To answer your question about the physical security of the system:

- All of the information is encrypted on the disks.
- The encryption keys are not stored in memory or on the disks.
- The operating system only exists in memory and is stateless; reboot and it's gone.
- The fabric is authenticated; a system can be forcefully removed from the fabric and would be unable to rejoin.
- If you reboot it, you only end up with drives containing encrypted data.
- If you pulled a drive while it was running, you would only end up with 1/6 of the data, and that would be encrypted too.
- You can't monitor communications, because the only traffic in and out of the box is encrypted as well.
- There are forms of memory encryption and compression at play as well, so rebooting another OS to read RAM won't help you either.
- The kernel was compiled with minimal hardware support and drivers, and no external buses at all (USB, FireWire, serial, etc.).

There are some additional attack vectors of concern. There is an active effort to address those as well.
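
To illustrate the "keys never touch the disk" property from the list above, here's a toy sketch using the Python cryptography package. It's my own illustration of the concept, not Cloudsmash's implementation: the volume key arrives over the authenticated fabric at boot, lives only in the running process, and only ciphertext is ever written out.

Code:
import os
from cryptography.hazmat.primitives.ciphers.aead import AESGCM

def fetch_volume_key():
    # Stand-in for "authenticate, join the fabric, receive the key".
    # The key exists only in this process; it is never persisted.
    return AESGCM.generate_key(bit_length=256)

aead = AESGCM(fetch_volume_key())

def write_block(path, block_no, plaintext):
    nonce = os.urandom(12)  # unique nonce per write
    ciphertext = aead.encrypt(nonce, plaintext, str(block_no).encode())
    with open(path, "wb") as f:  # only ciphertext ever hits the disk
        f.write(nonce + ciphertext)

def read_block(path, block_no):
    with open(path, "rb") as f:
        blob = f.read()
    return aead.decrypt(blob[:12], blob[12:], str(block_no).encode())

write_block("/tmp/blk0", 0, b"customer data")
assert read_block("/tmp/blk0", 0) == b"customer data"
# Pull the drive or reboot: without the fabric-delivered key, the bytes
# on disk are indistinguishable from random data.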


Great answers. Which leads to another... there's no redundancy, is there? What if, for example, someone's system crashes? HDD failure, other hardware failure, etc. I saw that data would be replicated if someone planned to go offline, but if it were unintended, it would need to be backed up for safety (otherwise killing the purpose of using it for most real-world scenarios, since we'd be running sites and other services), but that would require paying for multiple VPSs or something like that...

Just like with your typical cloud provider, each server we are providing has been configured to be as reliable as possible, with ECC memory, bonded networking, double-parity storage, dual power supplies, UPS and generator. The additional off-site redundancy is an extra layer of fault tolerance that is above and beyond what most cloud providers offer. Getting that kind of feature transparently relieves the user from having to implement it manually themselves (DRBD, rsync, etc.).

Without off-site redundancy, hardware failure results in your virtual machine going offline. I've had this happen to my own VMs with multiple mainstream providers. When the provider resolves the issue, your instance is brought back online. Then you could re-evaluate whether that provider is meeting your expectations. If not, just migrate to another provider seamlessly with no downtime. Migration time would depend on the volume of data being transferred to the alternative provider.

The real risk of instance failure is fairly low, but not zero. If the provider failed to meet their advertised SLA, a portion of your rental fee would be refunded.

When a hardware contributor's machine boots, it has to download and start the operating system, authenticate, join the fabric and mount the disks with the proper encryption keys. This process currently takes about 10 minutes.

In terms of the additional cost: if you replicate data off-site and reserve capacity for your instance in case of failure, you are utilizing double the resources, so it's double the price. You would have the ability to choose both your primary and secondary providers. There is technically no limit to the number of off-site replicas that you can have. You choose your level of fault tolerance and only pay for what you determine to be sufficient.
legendary
Activity: 1988
Merit: 1007
I'm confused on the selling your own resources part. If you do this, do you pick your own prices, or is the system set up with predetermined pricing that you have to go with?

And there's a downside to this type of system: how can you ensure that users hosting someone's VPS won't access the files/MITM it?

In this first beta round we are supplying all of the hardware and the prices are set at very reasonable initial levels. During the second round, when people are invited to contribute their own servers, everyone sets their own prices. It all comes down to whether you can offer a similar resource at a price competitive enough for someone to be interested in renting it.

In bitcoin terms, this is almost exactly like sites such as miningrigrentals.com, where mining rigs are listed by price and you select based on reputation, rental history and rig performance. All of those factors dictate what a fair asking price is.

Here's a list to see what I mean:

https://www.miningrigrentals.com/rigs/sha256

To answer your question about the physical security of the system:

- All of the information is encrypted on the disks.
- The encryption keys are not stored in memory or on the disks.
- The operating system only exists in memory and is stateless; reboot and it's gone.
- The fabric is authenticated; a system can be forcefully removed from the fabric and would be unable to rejoin.
- If you reboot it, you only end up with drives containing encrypted data.
- If you pulled a drive while it was running, you would only end up with 1/6 of the data, and that would be encrypted too.
- You can't monitor communications, because the only traffic in and out of the box is encrypted as well.
- There are forms of memory encryption and compression at play as well, so rebooting another OS to read RAM won't help you either.
- The kernel was compiled with minimal hardware support and drivers, and no external buses at all (USB, FireWire, serial, etc.).

There are some additional attack vectors of concern. There is an active effort to address those as well.


Great answers. Which leads to another... there's no redundancy, is there? What if, for example, someone's system crashes? HDD failure, other hardware failure, etc. I saw that data would be replicated if someone planned to go offline, but if it were unintended, it would need to be backed up for safety (otherwise killing the purpose of using it for most real-world scenarios, since we'd be running sites and other services), but that would require paying for multiple VPSs or something like that...
member
Activity: 107
Merit: 30
I'm confused on the selling your own resources part. If you do this, do you pick your own prices, or is the system set up with predetermined pricing that you have to go with?

And there's a downside to this type of system: how can you ensure that users hosting someone's VPS won't access the files/MITM it?

In this first beta round we are supplying all of the hardware and the prices are set at very reasonable initial levels. During the second round, when people are invited to contribute their own servers, everyone sets their own prices. It all comes down to whether you can offer a similar resource at a price competitive enough for someone to be interested in renting it.

In bitcoin terms, this is almost exactly like sites such as miningrigrentals.com, where mining rigs are listed by price and you select based on reputation, rental history and rig performance. All of those factors dictate what a fair asking price is.

Here's a list to see what I mean:

https://www.miningrigrentals.com/rigs/sha256

To answer your question about the physical security of the system:

- All of the information is encrypted on the disks.
- The encryption keys are not stored in memory or on the disks.
- The operating system only exists in memory and is stateless; reboot and it's gone.
- The fabric is authenticated; a system can be forcefully removed from the fabric and would be unable to rejoin.
- If you reboot it, you only end up with drives containing encrypted data.
- If you pulled a drive while it was running, you would only end up with 1/6 of the data, and that would be encrypted too.
- You can't monitor communications, because the only traffic in and out of the box is encrypted as well.
- There are forms of memory encryption and compression at play as well, so rebooting another OS to read RAM won't help you either.
- The kernel was compiled with minimal hardware support and drivers, and no external buses at all (USB, FireWire, serial, etc.).

There are some additional attack vectors of concern. There is an active effort to address those as well.
legendary
Activity: 1988
Merit: 1007
I'm confused on the selling your own resources part. If you do this, do you pick your own prices, or is the system set up with predetermined pricing that you have to go with?

And there's a downside to this type of system: how can you ensure that users hosting someone's VPS won't access the files/MITM it?
member
Activity: 107
Merit: 30

I have been using sunbreak's service for over a year. First-rate and professional!!

Thanks for the kind words.
member
Activity: 107
Merit: 30
I updated the original post with a revised description of the service based on feedback that I received. I hope it's easier to understand now.

Does anyone have any questions about the new write up?
legendary
Activity: 1876
Merit: 1000

I have been using sunbreak's service for over a year. First-rate and professional!!
hero member
Activity: 663
Merit: 501
It's interesting; I'll be keeping up and seeing how this plays out.
member
Activity: 107
Merit: 30
Not sure why you chose to post this here when you haven't announced your service yet. Either way, I like the project. Here are certain things you should work on:

-> End-to-end encryption - Cool, which algos?
-> How many network hops before joining the destination?
-> Planning to extend to IPv6 addresses?
-> Data is encrypted; where are the private keys stored?

I am announcing a new service; I'll continue to update the parent page.

There are paying customers who have been using this platform for over a year. I'm now willing to accept more customers from a more general audience.

To answer your question about hops before joining a destination: it depends on where you're starting from. I have peering points in Seattle, Dallas, New Jersey and Washington, DC. I'm always on the lookout for new peering partners; the goal is to peer in as many places as possible. Each peering point adds a point of redundancy and further distributes traffic, so the more the better. When traffic enters a peering point it is encrypted and the data is forwarded over our SDN fabric. The encryption is end to end, from the peering point all the way to the hypervisor hosting your virtual machine. The network is fully peer to peer and self-healing; if a peer is unable to communicate directly, it will relay through an intermediary.

IPv6 is currently functional, but only unicast out of Seattle at the moment. Enabling anycast mobility on IPv6 is on the to-do list.

I use a technique like TRESOR to keep the master key ring out of main memory. I know it isn't a perfect solution and is still susceptible to DMA-based attacks. Right now, though, it's better than nothing. Kernel hardening is yet another area of continued development.

I specifically designed everything to enable high-availability features transparently to a virtual machine. In the event of an outage, your machine could be live-migrated to the location where your block storage is replicated, all with zero downtime, no interruption of traffic flow and no changes to assigned IP addresses.
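
One way to picture the self-healing relay behavior: treat the fabric as a graph of currently working peer links and find the shortest relay path when a direct link is down. The node names and link map below are made up, and this is only a conceptual sketch, not the actual SDN code.

Code:
from collections import deque

links = {  # hypothetical "who can currently talk to whom" map
    "seattle": {"dallas", "frankfurt"},
    "dallas": {"seattle", "matawan"},
    "frankfurt": {"seattle", "matawan"},
    "matawan": {"dallas", "frankfurt"},
}

def route(src, dst):
    """Breadth-first search for the shortest relay path from src to dst."""
    queue, seen = deque([[src]]), {src}
    while queue:
        path = queue.popleft()
        if path[-1] == dst:
            return path
        for peer in links[path[-1]] - seen:
            seen.add(peer)
            queue.append(path + [peer])
    return None  # unreachable even via relays

# No direct seattle<->matawan link in the map above, so traffic
# self-heals through an intermediary:
print(route("seattle", "matawan"))  # ['seattle', 'dallas', 'matawan']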

legendary
Activity: 994
Merit: 1000
Your project sounds quite interesting and I will be happy to see it come to life and deliver what you have claimed above. However, I don't have enough time to give as a beta tester.
legendary
Activity: 1750
Merit: 1115
Not sure why you chose to post this here when you haven't announced your service yet. Either way, I like the project. Here are certain things you should work on:

-> End-to-end encryption - Cool, which algos?
-> How many network hops before joining the destination?
-> Planning to extend to IPv6 addresses?
-> Data is encrypted; where are the private keys stored?
legendary
Activity: 3262
Merit: 3675

It is interesting.
I'll submit an application for testing.
member
Activity: 107
Merit: 30
I built a decentralized virtual machine platform in an effort to deliver the cloud that I had envisioned when I first heard the term.

This is an open platform and anyone can participate. Just like with any other cloud provider, consumers can buy virtual machines and block storage. On this platform, however, you can also sell virtual machine instances and block storage as a contributor of server hardware. We act as the Internet service provider and supply the networking glue that makes it possible for a server sitting in your house, garage or datacenter to route publicly accessible IPv4/6 addresses over our encrypted network fabric.

We make money by taking a small commission on sales and by charging for IP transit and address space. We are responsible for building out a global network of peering points and handling IP prefix advertisement for thousands of public and private network fabrics. NOC support and abuse reports are handled no differently than at any other ISP; abusive participants can be banned from the fabric, and individual IP addresses can be null-routed.

Consumers creating new virtual machines can search for providers based on hardware features and historical metrics for reputation, uptime, CPU, memory, IOPS, latency and throughput. If a contributor has to take their server offline, all consumer virtual machines and block storage can be live-migrated to any server connected to our fabric with no downtime.
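
As a sketch of how those historical metrics could drive provider selection, here's a small weighted-scoring example. The weights, fields and numbers are invented for illustration; the real marketplace may rank providers entirely differently.

Code:
# Hypothetical scoring of providers by historical metrics.
def score(p):
    return (0.4 * p["uptime_pct"] / 100                      # reliability
            + 0.3 * min(p["iops"] / 100_000, 1.0)            # storage speed
            + 0.2 * (1.0 - min(p["latency_ms"] / 100, 1.0))  # network latency
            + 0.1 * p["reputation"])                         # 0..1 rating

providers = [
    {"name": "garage-box", "uptime_pct": 99.1, "iops": 20_000,
     "latency_ms": 35, "reputation": 0.7},
    {"name": "dc-rack", "uptime_pct": 99.99, "iops": 150_000,
     "latency_ms": 4, "reputation": 0.95},
]

print(max(providers, key=score)["name"])  # dc-rack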

Contributors net boot our Linux distribution using a bootable USB key. Upon booting, a unique identity is created and registered with our system. Our web administration interface allows you to claim these servers and bind them to your account. Then you determine whether you want the server to be part of your own private cloud fabric or whether you want anyone to be able to rent your resources on the public cloud fabric. You can also choose to do both: have your own private cloud while also monetizing your underutilized servers by renting your excess capacity to the public.

Over the last year a few dozen people have been helping me test this platform during its development. I've received positive feedback and it's time to invite the public to submit applications for the first phase of our beta round. Core services are production-ready and battle-tested, but subject to a more frequent maintenance cycle. Once we enter the second phase of beta testing we will be accepting applications for server contributors.

You can submit beta applications and other questions to the following:

[email protected]

I'm looking for help with continued development. If you feel you could contribute to this project, please contact me at the address listed above. I plan on accepting applications for full-time positions in the near future.

Please comment; I'm looking for feedback.

The goal of this project is to bring the "mining" model to the virtualization space and encourage anyone, including existing cloud providers, to put servers on our fabric and openly compete in a free market. Running our distribution eliminates all of the configuration and time required to set up a sophisticated cloud infrastructure and significantly lowers the barrier to entry for becoming a cloud provider. Anyone with a good server and fast unlimited internet can boot, register and list their server resources for rent in under 5 minutes. Your only responsibility is to make sure the server stays connected and powered on, and to offer prices that are competitive with similar offerings.

To seed the initial network, we have set up five locations:

Portland, OR
Fremont, CA
Berkeley Springs, WV
Ashburn, VA
Amsterdam, NL

Each location has a variety of servers on dedicated 1 Gbps fiber that can easily achieve gigabit speeds to their peering points. Hypervisors in each location communicate over bonded Ethernet at 20 Gbps. Our Fremont, CA site is at a Hurricane Electric datacenter with a 10 Gbps uplink directly into their global backbone.

These servers represent our initial fabric capacity, and I plan to add 2 to 3 more servers in 2 or 3 more locations as the need arises. The resources as of right now total:

- 752 CPU cores
- 3 TB of memory
- 576 TB of disk
- 6 TB of PCI-e NVMe SSD

Here are some features that differ from typical services:

- Decentralized - Don't think presence in a dozen locations, think servers in thousands of locations all over the globe.
- Globally Routed - Continually growing our peering relationships and setting up traffic relays all over the world.
- Anycast Enabled - Your IPv4 and IPv6 addresses stay the same regardless of your location in the fabric.
- Self Healing - The fabric will automatically relay through other neighboring nodes to bypass Internet outages.
- Encrypted - Encrypted from the edge routers to the hypervisor, even LAN traffic between servers is encrypted.
- Mobility - Request a live migration to any other server location with zero downtime, same IP.
- Encrypted Storage - All customer data is encrypted at rest, keys are not kept on disk or in memory.
- Snapshots - Take a live snapshot of your disk image and roll back changes to a known state.
- Disaster Recovery - Have your data automatically replicated to one or more other server locations.
- High Availability - Incremental replication enables fast instance migration or restart with large offsite datasets.
- Routing Policies - Choose peering points to send traffic through with custom ECMP policies or keep it automatic.

Here are some features I'm still working on:

- Blockchain Orchestration - Send bitcoin/tokens to an address to create an instance; destroy on zero balance (see the sketch after this list).
- Autonomous Hypervisors - Hypervisors that don't allow any login at all, lock out everyone including ourselves.
- Customer Migrations - Customers can initiate a live migration to any other server location.
- Bring Your Own IP - Create private networks that utilize our global network fabric to advertise your own prefix.
- Customer Keys - Customer provided encryption keys for storage or private network communications.
- Public Servers - Allow anyone to contribute capacity to the platform in the form of dedicated baremetal servers.
- Auditing - Open source distribution and configuration for professional and public audit.
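
Here's the kind of loop I have in mind for the blockchain orchestration item above: purely a toy simulation, where the instance object, price and balance feed are made-up stand-ins rather than an actual API.

Code:
# Toy simulation of "send coins to an address to create an instance,
# destroy on zero balance". Everything here is a hypothetical stand-in.
PRICE_PER_TICK = 1.0  # made-up price units per accounting tick

class Instance:
    running = False
    def start(self):
        self.running = True
        print("instance started")
    def destroy(self):
        self.running = False
        print("instance destroyed")

def orchestrate(instance, balance):
    """Run while the address balance is positive; tear down at zero."""
    if balance > 0 and not instance.running:
        instance.start()           # payment seen: boot the VM
    while instance.running:
        balance -= PRICE_PER_TICK  # meter usage each tick
        if balance <= 0:
            instance.destroy()     # balance exhausted: destroy

orchestrate(Instance(), balance=3.0)  # starts, runs 3 ticks, destroys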

Initial pricing during the beta period is:

- $1 / 1 shared vCPU
- $1 / 1 anycast IPv4 address
- $1 / 1 GB of ECC RAM
- $1 / 16 GB of PCI-e NVMe SSD
- $1 / 128 GB of double-parity fault-tolerant disk
- $1 / 250 GB of Internet data transfer

For example:

- 1 vCPU ($1) + 1 GB RAM ($1) + 16 GB SSD ($1) + IPv6 ($0) + IPv4 NAT ($0) = $3/month

- 1 vCPU ($1) + 1 GB RAM ($1) + 16 GB SSD ($1) + IPv6 ($0) + IPv4 ($1) + 250 GB transit ($1) = $5/month

- 2 vCPU ($2) + 2 GB RAM ($2) + 32 GB SSD ($2) + 128 GB disk ($1) + IPv4 ($1) + 1 TB transfer ($4) = $12/month

As we set up more peering arrangements, our bandwidth cost should come down drastically. Only Internet ingress and egress count towards data transfer accounting. All internal traffic is unmetered and free of charge, even if the traffic spans different locations. All instances receive public IPv6 addresses free of charge. Instances without a public IPv4 address are given private addresses in the 100.64.0.0/10 CGNAT range and have no data transfer limits, both internally between instances and externally to the Internet.

Pricing for highly available instances depends on the level of redundancy. So if you want your data to exist in 3 different locations, your price is simply triple the single-instance price. If a location suddenly goes offline, your instance can be restarted at the closest location that has your replicated data. If failure is imminent, your instance will be live-migrated with no downtime.
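
As a quick sanity check of the price list and example bundles above, here's a small calculator that reproduces them, with an N-replica multiplier for the high-availability pricing just described. A sketch of my own; actual billing may differ.

Code:
# Mirrors the beta price list above; replicas multiply the whole price.
def monthly_price(vcpu=0, ipv4=0, ram_gb=0, ssd_gb=0, disk_gb=0,
                  transfer_gb=0, replicas=1):
    base = (vcpu * 1              # $1 per shared vCPU
            + ipv4 * 1            # $1 per anycast IPv4 address
            + ram_gb * 1          # $1 per GB of ECC RAM
            + ssd_gb / 16         # $1 per 16 GB of NVMe SSD
            + disk_gb / 128       # $1 per 128 GB of parity disk
            + transfer_gb / 250)  # $1 per 250 GB of transfer
    return base * replicas

print(monthly_price(vcpu=1, ram_gb=1, ssd_gb=16))              # 3.0
print(monthly_price(vcpu=1, ram_gb=1, ssd_gb=16, ipv4=1,
                    transfer_gb=250))                          # 5.0
print(monthly_price(vcpu=2, ram_gb=2, ssd_gb=32, disk_gb=128,
                    ipv4=1, transfer_gb=1000))                 # 12.0
print(monthly_price(vcpu=1, ram_gb=1, ssd_gb=16, replicas=3))  # 9.0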

Future contributors would probably like to know what kind of hardware requirements to expect:

The current minimum:

- x86-64 architecture and 8 GB of memory
- Internet connection that supports UDP (NAT OK, no public IP required, EasyTether on LTE works!)
- Hardware that supports virtualization extensions
- UNDI-capable network card
- Ability to boot from USB
- No external peripherals (USB, FireWire, etc.)

These are optional, but highly recommended (a rough self-check sketch follows this list):

- Hardware that supports AES-NI, AVX or AVX2 - Due to all of the encryption, it would be pretty slow without them.
- ECC Memory - People debate it, but I sleep better at night knowing it's there.
- High Speed Internet - Try to avoid slow upstream connections. Symmetric gigabit fiber is ideal.
- Redundant Internet - Dual WAN connections can help avoid losing contracts due to Internet downtime.
- Unlimited Internet - Don't get slammed for data overages; pick a provider who won't limit you.
- NVMe PCI-e SSD - Achieve the highest customer density when utilizing high-IOPS, high-throughput SSDs.
- 6 disks or more - Additional parity/mirroring configurations will be available in the future.
- LSI2008 - This is what we are using now, so if you want assured compatibility, use this.
- 10 GbE LAN - More than one server in a single location? It would be advisable to go 10 GbE.
- Dedicated Bypass - Direct Ethernet connections between servers will utilize the direct link first.
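
For anyone curious whether their box would qualify, here's a rough Linux self-check against the minimum and recommended lists above. The thresholds mirror this post; the script itself is just an illustration, not an official qualification tool.

Code:
# Rough contributor preflight check (Linux/x86-64 only).
def cpu_flags():
    with open("/proc/cpuinfo") as f:
        for line in f:
            if line.startswith("flags"):
                return set(line.split(":")[1].split())
    return set()

def mem_gb():
    with open("/proc/meminfo") as f:
        return int(f.readline().split()[1]) / 1024 / 1024  # MemTotal, in kB

flags = cpu_flags()
checks = {
    "virtualization extensions (vmx/svm)": bool(flags & {"vmx", "svm"}),
    ">= 8 GB memory": mem_gb() >= 8,
    "AES-NI (recommended)": "aes" in flags,
    "AVX/AVX2 (recommended)": bool(flags & {"avx", "avx2"}),
}
for name, ok in checks.items():
    print(f"{'PASS' if ok else 'FAIL'}: {name}")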

All pricing is subject to change; I only expect prices to go down. Eventually, when we come out of beta, pricing will follow the free market, as contributors will be able to set their prices and compete with other contributing cloud providers on a level playing field.