
Topic: Cloudsmash.io - Decentralized VPS Cloud Open To The Public - page 3. (Read 3458 times)

member
Activity: 107
Merit: 30
Not sure why you chose to post this here when you haven't announced your service yet. Either way, I like the project. Here are certain things you should work on:

-> End-to-end encryption - Cool, which algos?
-> How many network hops before reaching the destination?
-> Planning to extend to IPv6 addresses?
-> Data is encrypted, but where are the private keys stored?

I am announcing a new service, I'll continue to update the parent page.

There are paying customers who have been using this platform for over a year. I'm now willing to accept more customers from a more general audience.

To answer your question about hops before reaching a destination: it depends where you're starting from. I have peering points in Seattle, Dallas, New Jersey and Washington, DC. I'm always on the lookout for new peering partners; the goal is to peer in as many places as possible. Each peering point adds a point of redundancy and further distributes traffic, so the more the better. When traffic enters a peering point it is encrypted and the data is forwarded over our SDN fabric. The encryption is end to end, from the peering point all the way to the hypervisor hosting your virtual machine. The network is fully peer to peer and self healing: if a peer is unable to communicate directly, it will relay off an intermediate.
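The self-healing behavior described above (try the direct path, fall back to relaying through a reachable neighbor) can be sketched roughly like this. All names and the API here are illustrative guesses, not the actual fabric implementation:

```python
import random

def send(packet, dest, peers, can_reach):
    """Deliver a packet over a self-healing peer-to-peer fabric.

    Tries the direct path first; if the destination is unreachable,
    relays through an intermediate peer that is still reachable and
    can forward on our behalf. (Hypothetical sketch only.)
    """
    if can_reach(dest):
        return ("direct", dest)
    # Direct path failed: pick any neighbor we can still talk to
    # and let it relay the packet toward the destination.
    relays = [p for p in peers if p != dest and can_reach(p)]
    if not relays:
        raise ConnectionError("no path to " + dest)
    return ("relay", random.choice(relays))
```

The real fabric would also need the relay to verify it can reach the destination, but the shape of the fallback is the same.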

IPv6 is currently functional, but only unicast out of Seattle at the moment. Enabling Anycast mobility on IPv6 is on the todo list.

I use a technique like TRESOR to keep the master key ring out of main memory. I know it isn't a perfect solution and it's still susceptible to DMA-based attacks, but right now it's better than nothing. Kernel hardening is yet another area of continued development.

I specifically designed everything to enable high availability features transparently to a virtual machine. In the event of an outage, your machine can be live migrated to the location where your block storage is replicated, with zero downtime, no interruption of traffic flow, and no changes to assigned IP addresses.

legendary
Activity: 994
Merit: 1000
Your project sounds quite interesting and I will be happy to see it come to life and deliver what you have claimed above. However, I don't have enough time to give as a beta tester.
legendary
Activity: 1750
Merit: 1115
Providing AI/ChatGpt Services - PM!
Not sure why you chose to post this here when you haven't announced your service yet. Either way, I like the project. Here are certain things you should work on:

-> End-to-end encryption - Cool, which algos?
-> How many network hops before reaching the destination?
-> Planning to extend to IPv6 addresses?
-> Data is encrypted, but where are the private keys stored?
legendary
Activity: 3262
Merit: 3675
Top Crypto Casino

It is interesting. I'll submit an application for testing.
member
Activity: 107
Merit: 30
I built a decentralized virtual machine platform in an effort to deliver the cloud that I had envisioned when I first heard the term.

This is an open platform and anyone can participate. Just like any other cloud provider, consumers can buy virtual machines and block storage. On this platform however you can also sell virtual machine instances and block storage as a contributor of server hardware. We act as the Internet service provider and supply the networking glue that makes it possible for a server sitting in your house, garage or datacenter to route publicly accessible IPv4/6 addresses over our encrypted network fabric.

We make money by taking a small commission on sales and by charging for IP transit and address space. We are responsible for building out a global network of peering points and handling IP prefix advertisement for thousands of public and private network fabrics. NOC support and abuse reports are handled no differently than at any other ISP: abusive participants can be banned from the fabric, and individual IP addresses can be null routed.

Consumers creating new virtual machines can search for providers based on hardware features and historical metrics for reputation, uptime, CPU, memory, IOPS, latency and throughput. If a contributor has to take their server offline then all consumer virtual machines and block storage can be live migrated to any server connected to our fabric with no downtime.

Contributors net boot our Linux distribution using a bootable USB key. Upon booting a unique identity is created and registers with our system. Our web administration interface allows you to claim these servers and bind them to your account. Then you determine if you want the server to be part of your own private cloud fabric or if you want anyone to be able to rent your resources on the public cloud fabric. You can also choose to do both, have your own private cloud but also monetize your under utilized servers and rent your excess capacity to the public.
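The register-then-claim flow described above can be modeled as a small sketch. Everything here (class names, fields, the claim API) is hypothetical; the real control plane isn't documented:

```python
import uuid

class ControlPlane:
    """Toy model of the boot -> register -> claim flow:
    a freshly booted server announces a unique identity, then the
    owner binds it to their account and picks its fabric visibility.
    (Hypothetical sketch only.)"""

    def __init__(self):
        self.unclaimed = {}   # server_id -> hardware info
        self.accounts = {}    # account -> {server_id: (hw, visibility)}

    def register(self, hw_info):
        # On first boot each server generates a unique identity
        # and registers itself with the control plane.
        server_id = uuid.uuid4().hex
        self.unclaimed[server_id] = hw_info
        return server_id

    def claim(self, account, server_id, visibility="private"):
        # The owner claims the server through the web interface and
        # chooses private fabric, public fabric, or both.
        assert visibility in ("private", "public", "both")
        hw = self.unclaimed.pop(server_id)
        self.accounts.setdefault(account, {})[server_id] = (hw, visibility)
```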

Over the last year a few dozen people have been helping me test this platform during its development. I've received positive feedback, and it's time to invite the public to submit applications for the first phase of our beta round. Core services are production ready and battle tested, but subject to a more frequent maintenance cycle. Once we enter the second phase of beta testing we will begin accepting applications for server contributors.

You can submit beta applications and other questions to the following:

[email protected]

I'm looking for help with continued development. If you feel you could contribute to this project, please contact me at the address listed above. I plan on accepting applications for full time positions in the near future.

Please comment, I'm looking for feedback.

The goal of this project is to bring the "mining" model to the virtualization space and encourage anyone, including existing cloud providers, to put servers on our fabric and openly compete in a free market. Running our distribution eliminates all of the configuration and time required to set up a sophisticated cloud infrastructure and significantly lowers the barrier to entry for becoming a cloud provider. Anyone with a good server and fast, unlimited internet can boot, register and list their server resources for rent in under 5 minutes. Your only responsibilities are to keep it connected and powered on and to offer prices that are competitive with similar offerings.

To seed the initial network, we have set up five locations:

Portland, OR
Fremont, CA
Berkeley Springs, WV
Ashburn, VA
Amsterdam, NL

Each location has a variety of servers on dedicated 1 Gbps fiber that can easily achieve gigabit speeds to their peering points. Hypervisors in each location communicate over bonded Ethernet at 20 Gbps. Our Fremont, CA site is at a Hurricane Electric datacenter with a 10 Gbps uplink directly into their global backbone.

These servers represent our initial fabric capacity, and I plan to add 2 to 3 more servers in 2 or 3 more locations as the need arises. Current resource totals:

- 752 CPU cores
- 3 TB of memory
- 576 TB of disk
- 6 TB of PCIe NVMe SSD

Here are some features that differ from typical services:

- Decentralized - Don't think presence in a dozen locations, think servers in thousands of locations all over the globe.
- Globally Routed - Continually growing our peering relationships and setting up traffic relays all over the world.
- Anycast Enabled - Your IPv4 and IPv6 addresses stay the same regardless of your location in the fabric.
- Self Healing - Fabric will automatically relay through other neighboring nodes to bypass Internet outages.
- Encrypted - Encrypted from the edge routers to the hypervisor, even LAN traffic between servers is encrypted.
- Mobility - Request a live migration to any other server location with zero downtime, same IP.
- Encrypted Storage - All customer data is encrypted at rest, keys are not kept on disk or in memory.
- Snapshots - Take a live snapshot of your disk image and roll back changes to a known state.
- Disaster Recovery - Have your data automatically replicated to one or more other server locations.
- High Availability - Incremental replication enables fast instance migration or restart with large offsite datasets.
- Routing Policies - Choose peering points to send traffic through with custom ECMP policies or keep it automatic.
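The custom ECMP routing policies mentioned in the list above typically work by hashing a flow's 5-tuple to pick consistently among the allowed peering points, so packets of one flow always take the same path. A minimal sketch (peering point names and the function itself are illustrative, not the actual implementation):

```python
import hashlib

def pick_peering_point(flow, allowed):
    """Deterministically map a flow 5-tuple (src IP, dst IP, protocol,
    src port, dst port) onto one of the allowed peering points.
    Same flow -> same path, which is standard ECMP behavior.
    (Sketch only; real routers do this in hardware.)"""
    key = "|".join(str(field) for field in flow).encode()
    digest = int.from_bytes(hashlib.sha256(key).digest()[:8], "big")
    return sorted(allowed)[digest % len(allowed)]
```

Restricting the `allowed` list is what a custom policy amounts to; leaving it as the full set of peering points gives the automatic behavior.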

Here are some features I'm still working on:

- Blockchain Orchestration - Send bitcoin/tokens to an address to create instance, destroy on zero balance.
- Autonomous Hypervisors - Hypervisors that don't allow any login at all, lock out everyone including ourselves.
- Customer Migrations - Customers can initiate a live migration to any other server location.
- Bring Your Own IP - Create private networks that utilize our global network fabric to advertise your own prefix.
- Customer Keys - Customer provided encryption keys for storage or private network communications.
- Public Servers - Allow anyone to contribute capacity to the platform in the form of dedicated baremetal servers.
- Auditing - Open source distribution and configuration for professional and public audit.
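The planned blockchain orchestration from the list above could look roughly like the loop below: watch a deposit address, keep the instance alive while the balance covers the rate, and destroy it at zero. Everything here (function name, billing interval, the create/destroy hooks) is a guess at the design, not the actual implementation:

```python
def orchestrate_tick(balance_sat, hourly_rate_sat, instance_running,
                     create, destroy):
    """One scheduling tick of a pay-per-balance instance:
    create the instance when funds cover the next hour, debit one
    hour of runtime, destroy it once the balance runs out.
    Returns the remaining balance in satoshis. (Hypothetical sketch.)"""
    if balance_sat >= hourly_rate_sat:
        if not instance_running:
            create()                      # funds arrived: spin up
        return balance_sat - hourly_rate_sat
    if instance_running:
        destroy()                         # balance exhausted: tear down
    return balance_sat
```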

Initial pricing during the beta period is:

- $1 / 1 shared vCPU
- $1 / 1 anycast IPv4 address
- $1 / 1 GB of ECC RAM
- $1 / 16 GB of PCIe NVMe SSD
- $1 / 128 GB of double-parity fault-tolerant disk
- $1 / 250 GB of internet data transfer

For example:

- 1 vCPU ($1) + 1 GB RAM ($1) + 16 GB SSD ($1) + IPv6 ($0) + IPv4 NAT ($0) = $3/month

- 1 vCPU ($1) + 1 GB RAM ($1) + 16 GB SSD ($1) + IPv6 ($0) + IPv4 ($1) + 250 GB transit ($1) = $5/month

- 2 vCPU ($2) + 2 GB RAM ($2) + 32 GB SSD ($2) + 128 GB disk ($1) + IPv4 ($1) + 1 TB transfer ($4) = $12/month
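With every line item priced at $1 per unit, the monthly cost is just a sum of rounded-up unit counts. A small calculator reproducing the three examples above (unit sizes taken from the beta price list; the function name is mine):

```python
import math

# Beta price list: $1 buys one "unit" of each resource.
UNIT = {
    "vcpu": 1,            # shared vCPUs per $1
    "ipv4": 1,            # anycast IPv4 addresses per $1
    "ram_gb": 1,          # GB of ECC RAM per $1
    "ssd_gb": 16,         # GB of PCIe NVMe SSD per $1
    "disk_gb": 128,       # GB of double-parity disk per $1
    "transfer_gb": 250,   # GB of internet data transfer per $1
}

def monthly_price(**resources):
    """Sum the $1 line items, rounding partial units up.
    IPv6 and NAT'd IPv4 are free, so they never appear here."""
    return sum(math.ceil(amount / UNIT[kind])
               for kind, amount in resources.items())
```

For a triple-replicated highly available instance, multiply the result by the replication factor, per the pricing note below.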

As we set up more peering arrangements, our bandwidth costs should come down drastically. Only internet ingress and egress count toward data transfer accounting. All internal traffic is unmetered and free of charge, even if the traffic spans different locations. All instances receive public IPv6 addresses free of charge. Instances without a public IPv4 address are given private addresses in the 100.64.0.0/10 CGNAT range and have no data transfer limits, either internally between instances or externally to the Internet.

Pricing for highly available instances depends on the level of redundancy. If you want your data replicated to exist in 3 different locations, then your price is simply triple the single-instance price. If a location suddenly goes offline, your instance can be restarted at the closest location that has your replicated data. If failure is imminent, your instance will be live migrated with no downtime.

Future contributors will probably want to know what kind of hardware requirements to expect.

The current minimums are:

- x86-64 architecture and 8 GB of memory
- Internet connection that supports UDP (NAT ok, no public IP required, EasyTether on LTE works!)
- Hardware that supports virtualization extensions
- UNDI capable network card
- Ability to boot from USB
- No external peripherals (usb, firewire, etc)

These are optional, but highly recommended:

- Hardware that supports AES-NI, AVX or AVX2 - Due to all of the encryption it would be pretty slow without them.
- ECC Memory - People debate it, but I sleep better at night knowing it's there.
- High Speed Internet - Try to avoid slow upstream connections. Symmetric gigabit fiber is ideal.
- Redundant Internet - Dual WAN connections can help avoid losing contracts due to Internet downtime.
- Unlimited Internet - Don't get slammed for data overage, pick a provider who won't limit you.
- NVMe PCIe SSD - Achieve the highest customer density when utilizing high-IOPS, high-throughput SSDs.
- 6 disks or more - Additional parity/mirroring configurations will be available in the future.
- LSI2008 - This is what we are using now, so if you want assured compatibility, use this.
- 10 GbE LAN - More than one server in a single location? It would be advisable to go 10 GbE.
- Dedicated Bypass - Direct ethernet connections between servers will utilize the direct link first.

All pricing is subject to change; I only expect prices to go down. Eventually, when we come out of beta, pricing will follow the free market, as contributors will be able to set their price and compete with other contributing cloud providers on a level playing field.