I Wrote a Decentralized Browser on Top of BTSync. I Call It SyncNet. Check It Out!


sirphilip


Hey everyone,

 

I recently wrote a decentralized web browser on top of BTSync. BTSync handles all the file distribution, so to load a page you just have to enter the secret. I am working on integrating Namecoin, which will handle name resolution so you don't have to memorize a secret.
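To make that flow concrete, here is a rough sketch (simplified, not the actual SyncNet code) of what loading a page from a secret can look like against the BTSync HTTP API, assuming the default localhost:8888 endpoint with basic auth and the API key enabled; the credentials, secret, and paths below are placeholders you would swap for your own.

    # Sketch only: ask a local BTSync instance to start syncing the folder
    # behind a read-only secret, then read the downloaded index.html.
    # Assumes the documented BTSync API on localhost:8888 with basic auth.
    import os
    import time
    import requests

    BTSYNC_API = "http://127.0.0.1:8888/api"
    AUTH = ("admin", "password")                        # your BTSync un/pass
    PAGE_SECRET = "B4KWMK3VBJSH35YZMS7ZEMSQ6XNVBHALY"   # read-only secret for a "site"
    LOCAL_DIR = os.path.expanduser("~/.syncnet/" + PAGE_SECRET)

    os.makedirs(LOCAL_DIR, exist_ok=True)

    # Tell BTSync to sync the secret into a local folder.
    requests.get(BTSYNC_API, auth=AUTH, params={
        "method": "add_folder",
        "dir": LOCAL_DIR,
        "secret": PAGE_SECRET,
    })

    # Poll until index.html shows up, then hand it to the browser view.
    index_path = os.path.join(LOCAL_DIR, "index.html")
    while not os.path.exists(index_path):
        time.sleep(1)

    with open(index_path) as f:
        html = f.read()   # in SyncNet this gets rendered in the GUI, not just read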

 

You can read more on my website:

http://jack.minardi.org/software/syncnet-a-decentralized-web-browser/

 

And you can follow along with the development on github:

https://github.com/jminardi/syncnet

 

TorrentFreak also recently wrote about this project:

http://torrentfreak.com/bittorrent-sync-used-to-create-decentralized-web-browser-140204/

 

Let me know if you have any ideas or questions! I'd love some feedback from actual BTSync users.


This is very cool. Practically making every person a web server.

 

This could be an interesting way of sharing very large files, etc.

 

More interesting would be to build a search engine that lets you find the secret (or colored coin) given a query.

 

What are some use cases you think people will use your browser for?


 

 

This is a really good idea for a next-generation web browser application that uses a DHT instead of the DNS protocol.

 

The application will have to manage page caching with selective sync, and that is far from impossible.
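For illustration, a rough sketch of how that on-demand caching could look through the BTSync API, assuming add_folder accepts a selective_sync flag and a set_file_prefs method exists as described in the published API docs (treat the method and parameter names as assumptions):

    # Sketch of selective caching: add the share without downloading anything,
    # then flag only the page the user actually navigates to. Method and
    # parameter names follow the BTSync API docs and may differ in your build.
    import requests

    API = "http://127.0.0.1:8888/api"
    AUTH = ("admin", "password")
    SITE_SECRET = "READONLYSECRETFORLARGESITE"   # placeholder

    def fetch_on_demand(rel_path):
        # Add the share in selective-sync mode so nothing downloads by default.
        requests.get(API, auth=AUTH, params={
            "method": "add_folder",
            "dir": "/tmp/bigsite",
            "secret": SITE_SECRET,
            "selective_sync": 1,
        })
        # Then mark just the requested file for download.
        requests.get(API, auth=AUTH, params={
            "method": "set_file_prefs",
            "secret": SITE_SECRET,
            "path": rel_path,
            "download": 1,
        })

    fetch_on_demand("articles/2014/some-page.html")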

 

Q: What if it's a very large site and you are not interested in viewing everything on it?

 

A: The same application can fall back to viewing classic web-server pages without further ado, covering both types of usage while keeping the mental load on the user low.

We all know the moment when a server is down and Google Chrome prompts: "Sorry, the page is unavailable. Do you want to view a cached one?"

 

Next, the problem of domain names is solvable as a DHT: if just a few DNS servers are set up as torrents (using Bitcoin for authenticity), a query against those torrents would be suitable for the DNS lookup. www.my.ultra.fancy.site.here would be resolved as a torrent file, so when you "search" from the application you actually issue a search for a torrent, and the torrent returns the read-only secret. Even watching updates on a large site (Google) becomes easy if you can tell the service that you want to subscribe to a particular subject.
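Purely as an illustration of that resolution chain (human-readable name -> read-only secret -> synced files), here is a toy sketch; the NAME_INDEX dict is a stand-in for whatever DHT, Namecoin, or torrent-backed lookup would really hold the mapping, and every entry in it is made up:

    # Toy resolution chain: name -> read-only secret.
    # The index below is a placeholder for a real distributed lookup.
    NAME_INDEX = {
        "my.ultra.fancy.site.here": "BEXAMPLESECRETAAAAAAAAAAAAAAAAAAA",
    }

    def resolve(name):
        """Return the read-only secret for a name, if the index knows it."""
        return NAME_INDEX.get(name)

    def open_site(name):
        secret = resolve(name)
        if secret is None:
            raise LookupError("no record for %s" % name)
        # From here on it is the same flow as typing a secret by hand:
        # hand the secret to BTSync (add_folder) and render the synced files.
        return secret

    print(open_site("my.ultra.fancy.site.here"))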

 

Q: Wouldn't an intermediary search engine have to be able to catalog all files?

 

A: No. George Rzevski invented a broadcast-based search engine back in the 1970s, which was way ahead of its time. It used a protocol where the local application broadcast keywords to peers on a scale-free network based on a "guess" of who would know. These peers then either returned their own guess or echoed the query to those they believed might know.

Though the PageRank algorithm had not yet been invented, Rzevski's model used an 8-bit "confidence" level to rank responses; 255 meant "I don't know and don't know anyone who knows". Based on Jure Leskovec's research from 2008-2012 (Stanford), we now know that you can reach every application on the internet in this manner within 32 steps if we know just two devices.

 

The trick is that the analysis of the query is distributed, so it is as if every device in the world had a keyword list and a peer list. When a query arrives at a device, it asks itself: can I help, either by answering the query or by echoing it to the peers I know? BitTorrent research suggesting that 8 peers is enough provides a reasonable heuristic.

 

When the broadcasting device is done "asking" and has assembled all responses, it will be able to answer and point others to the results. So while you wait for your peers to respond and browse through the search result pages < 1, 2, 3, ... next >, you indicate search depth, looking for lower-ranking results and extending the search horizon. In effect you "broadcast" to your peers:

"Sorry mate, the last result was nice, but not what I wanted. Could you please ask your peers for alternatives?"

They will then take the next step down the confidence rank and provide answers.
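As a toy model of that broadcast search (numbers and structure are illustrative only, not Rzevski's actual protocol): each node holds a few keywords with a confidence value where 0 is certain and 255 means "no idea", plus a short peer list; a query is answered locally if possible and otherwise echoed to peers up to a hop limit, and the originator sorts whatever comes back by confidence.

    # Toy broadcast search with confidence ranking (illustrative only).
    class Node:
        def __init__(self, name, knowledge, peers=None):
            self.name = name
            self.knowledge = knowledge   # keyword -> (secret, confidence 0..255)
            self.peers = peers or []     # ~8 peers is the heuristic mentioned above

        def query(self, keyword, max_hops=3):
            results = []
            if keyword in self.knowledge:
                secret, confidence = self.knowledge[keyword]
                results.append((confidence, secret, self.name))
            if max_hops > 0:
                for peer in self.peers:
                    results.extend(peer.query(keyword, max_hops - 1))
            return sorted(results)   # lowest confidence value (best guess) first

    # Tiny three-node network: a knows b, b knows c.
    c = Node("c", {"syncnet": ("BSECRETFROMNODEC", 10)})
    b = Node("b", {}, peers=[c])
    a = Node("a", {"syncnet": ("BSECRETFROMNODEA", 120)}, peers=[b])

    print(a.query("syncnet"))   # node c's more confident answer ranks first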

 

(George Rzevski originally invented this for data-mining material that in the 1970s could not fit onto a single data tape, so his solution was very pre-Hadoop but did much the same thing when all servers were known. The problem, however, was that tapes wore out, cracked, etc., so from time to time there would be downtime. To overcome this, he added the "who should know" confidence level, and if everything pointed to a machine that was not available, the system could return a maintenance message. But that is another story...)

 

Let's re-invent the internet browser :-)

Link to comment
Share on other sites

Super cool, and I really want to try it out. Unfortunately, I can't get it working. I run python syncnet.py and the GUI comes up, so I'm assuming I have the enaml dependency resolved correctly. I have python-btsync; I wasn't sure where syncnet needs it, so I just symlinked it into the same directory as syncnet.py. I have btsync running with the default un/pass/port and have included my API key in the config.json file (does this need to be in the same directory as syncnet?). BTSync is running and the API works as expected.

 

When I enter "sync://B4KWMK3VBJSH35YZMS7ZEMSQ6XNVBHALY", nothing happens, and I don't see anything in stdout. Any ideas as to what I could be doing wrong? I'm on 64-bit Linux if that helps.
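In case it helps anyone spot the problem, this is roughly how I'm checking the API outside SyncNet (default port/credentials assumed; endpoint and parameter names follow the BTSync API docs, so adjust to your setup): feeding the same secret straight to add_folder and seeing whether it's accepted. If this returns an error, the issue is the secret or the target directory rather than the GUI.

    # Quick sanity check: does BTSync itself accept this secret?
    import os
    import requests

    API = "http://127.0.0.1:8888/api"
    AUTH = ("admin", "password")   # whatever un/pass btsync is running with

    os.makedirs("/tmp/syncnet-test", exist_ok=True)
    resp = requests.get(API, auth=AUTH, params={
        "method": "add_folder",
        "dir": "/tmp/syncnet-test",
        "secret": "B4KWMK3VBJSH35YZMS7ZEMSQ6XNVBHALY",
    }).json()
    print(resp)   # a non-zero "error" field points at the secret or path

    # List the folders BTSync currently knows about.
    print(requests.get(API, auth=AUTH, params={"method": "get_folders"}).json())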
