Day 54/365 or day 11/28 for Singing Phoenix

What I accomplished today

  • Wiping old Sapphirian server [90] I need to return the hardware, and I've lost my touch with USB operating systems, booting, troubleshooting and such. It was quite frustrating, but I got it done.
  • Prepping OS for new computer [11] The operating system I require is supported, which had been a big worry of mine.
  • Replanned rest of this sprint [5] Done. Not too hard.
  • Beginning to bring in rest of research and transfer all unnecessary research to Idea's Vault [14] # Singing Phoenix As with all my sessions, I start with singing the song again; this gets me primed, warms up my vocal cords and releases happy chemicals. Today I did 1 level of vocal match (Level 10). The next level attempt I could not get; I spent close to 4 minutes just on the practice. Afterwards I focused on working through the Pitch I: Basic Tuning and Intonation module, from 33% up to 56%. I'm now better able to compare pitch between two different instruments and tune a virtual guitar, which leads to a better ability to detect and match pitch.

@Anklebuster Borg cloud contraption

This is a really good idea. If you treat it as a RAID volume you could live off all the "free storage" by utilizing multiple RAID levels.

For example:

Take Google, OneDrive and some other storage. Imagine Google offers 15 GB for free, OneDrive offers 10 and the other storage offers 5.

We could first construct a RAID that combines two "disks" into one bigger disk. That is, fuse OneDrive and the other storage together.

Then we could either go for a massive 30 GB of free storage, OR take Google on one side and the fused OneDrive-plus-other storage on the other and do a RAID mirror!

That means every time you create file X and put it in the folder, it's:

1) Broken into chunks and stored in both Google and the fused storage
2) Within the fused storage, the individual backends handle the division and allocation of chunks.
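The write path above can be sketched in a few lines. This is a toy model, not an implementation: the `Cloud` stub stands in for a real provider API, the round-robin allocation in `FusedDisk` and the 4-byte chunk size are assumptions chosen for illustration.

```python
CHUNK = 4  # toy chunk size in bytes; a real system would use much larger chunks


class Cloud:
    """Stands in for one provider's storage (hypothetical in-memory stub)."""

    def __init__(self):
        self.blobs = {}

    def put(self, chunk_id, data):
        self.blobs[chunk_id] = data

    def get(self, chunk_id):
        return self.blobs[chunk_id]  # KeyError if the "cloud" lost it


class FusedDisk:
    """Fuses several backends into one bigger logical disk (RAID 0-ish)."""

    def __init__(self, backends):
        self.backends = backends

    def put(self, chunk_id, data):
        # Simple allocation policy: round-robin chunks across backends.
        self.backends[chunk_id % len(self.backends)].put(chunk_id, data)

    def get(self, chunk_id):
        return self.backends[chunk_id % len(self.backends)].get(chunk_id)


class Mirror:
    """Writes every chunk to both sides, reads from whichever still has it (RAID 1-ish)."""

    def __init__(self, left, right):
        self.left, self.right = left, right

    def put(self, chunk_id, data):
        self.left.put(chunk_id, data)
        self.right.put(chunk_id, data)

    def get(self, chunk_id):
        try:
            return self.left.get(chunk_id)
        except KeyError:
            return self.right.get(chunk_id)  # redundancy pays off


def write_file(mirror, data):
    """Break a file into chunks and push each one through the mirror."""
    chunks = [data[i:i + CHUNK] for i in range(0, len(data), CHUNK)]
    for cid, chunk in enumerate(chunks):
        mirror.put(cid, chunk)
    return len(chunks)
```

Losing either side of the mirror (say, the Google stub) still leaves every chunk readable from the fused side, which is exactly the redundancy argument above.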

This could allow you to create very complicated storage systems with stacked RAID, fetch preferences and built-in redundancy (on top of the providers' own redundancy!), along with data being stored in geographically different regions! WOW.

Here's what you'd need to implement:

1) RAID header. We want the RAID header(s) stored in the cloud so that by logging into one cloud the system can automatically restore the entire CloudRAID configuration (or at least a full "disk"). We'd need to worry about encryption and padding, because we don't want a cloud provider to know how many other clouds we're using with CloudRAID; that could aid an endpoint correlation attack. Furthermore we could introduce delayed random writes, or write in fixed 1 MB sizes. This would make tracking harder if many people used the software.
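The fixed-size-write idea can be sketched like this: pad every outgoing record to exactly 1 MB so a provider sees only uniform writes and can't infer chunk counts or file sizes. The 4-byte length prefix and random-fill padding are assumptions for illustration, not a real CloudRAID format (and the padding would sit inside the encryption layer, not replace it).

```python
import os
import struct

RECORD = 1024 * 1024  # every write to a provider is exactly 1 MB


def pad_record(data: bytes) -> bytes:
    """Prefix with the real length, then pad with random bytes to RECORD size."""
    if len(data) > RECORD - 4:
        raise ValueError("payload too large for one record")
    return struct.pack(">I", len(data)) + data + os.urandom(RECORD - 4 - len(data))


def unpad_record(record: bytes) -> bytes:
    """Recover the original payload from a fixed-size record."""
    (n,) = struct.unpack(">I", record[:4])
    return record[4:4 + n]
```

Random-fill (rather than zero-fill) padding means even identical-length payloads produce records that look unrelated on the wire.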

2) Encryption. Our data should belong only to us. Luckily https://www.cryfs.org has already solved this problem.

3) How the hell do we figure out where each data chunk is? Basically: do a little RAID research and build a small data structure, like a B-tree, that can exist locally (cached) or remotely.
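A minimal sketch of that chunk index: a mapping from (file, chunk number) to the provider holding it, which can live in the local cache or be serialized and pushed to a cloud. A real system might use an on-disk B-tree; the plain dict and JSON round-trip here (including the `"file:chunk"` key encoding) are stand-in assumptions.

```python
import json


class ChunkIndex:
    """Tracks which provider holds each chunk of each file."""

    def __init__(self):
        self.where = {}  # (file_id, chunk_no) -> provider name

    def record(self, file_id, chunk_no, provider):
        self.where[(file_id, chunk_no)] = provider

    def locate(self, file_id, chunk_no):
        return self.where[(file_id, chunk_no)]

    def dump(self) -> str:
        # JSON keys must be strings, so encode the tuple key as "file:chunk".
        return json.dumps({f"{f}:{c}": p for (f, c), p in self.where.items()})

    @classmethod
    def load(cls, blob: str):
        # Rebuild the index from a blob fetched locally or from a cloud.
        idx = cls()
        for key, p in json.loads(blob).items():
            f, c = key.rsplit(":", 1)
            idx.where[(f, int(c))] = p
        return idx
```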

4) How do we deal with FAILURE or cloud inconsistencies? Basically we'll want to construct something similar to SATA, or possibly an even simpler protocol, where we can seek(chunkID), pull(chunkID) and push(chunkID). So when we push a lot of chunks into a fused file system, that fused file system will abstract where those chunks are actually stored. We can make it easier by having a "root chunk" that we always fetch first.
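The seek/pull/push interface plus root chunk could look like this. Chunk ID 0 as the reserved root and the comma-separated manifest format are assumptions made up for the sketch; the point is only that a reader needs nothing but "fetch the root first" to find everything else.

```python
class ChunkStore:
    """Toy backend exposing the three-verb protocol described above."""

    ROOT = 0  # assumed convention: chunk 0 is the always-fetched root

    def __init__(self):
        self.chunks = {}

    def push(self, chunk_id: int, data: bytes):
        self.chunks[chunk_id] = data

    def pull(self, chunk_id: int) -> bytes:
        return self.chunks[chunk_id]

    def seek(self, chunk_id: int) -> bool:
        """Check existence without transferring the data."""
        return chunk_id in self.chunks


def store_file(store, data, chunk_size=4):
    """Push a file as chunks, then push a root chunk listing their IDs."""
    ids = []
    for i in range(0, len(data), chunk_size):
        cid = len(ids) + 1  # 0 is reserved for the root
        store.push(cid, data[i:i + chunk_size])
        ids.append(cid)
    store.push(ChunkStore.ROOT, ",".join(map(str, ids)).encode())
    return ids


def fetch_file(store):
    """Pull the root chunk first, then reassemble the file it points to."""
    manifest = store.pull(ChunkStore.ROOT).decode()
    return b"".join(store.pull(int(c)) for c in manifest.split(","))
```

With this shape, failure handling becomes "seek before pull, and fall back to the mirror when seek says no", all behind the same three verbs.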

