Automatic Coding

Members

  • Posts: 218
  • Days Won: 8

Posts posted by Automatic Coding

  1. Yeah, I'm aware of the limitations of wifi, which is why I want the sync to go through the other network. Here's everything about the Innbox (ignore the wifi specs, I don't use the built-in wifi):

    http://www.innbox.ne...atasheet_en.pdf

    For starters, I'm confused about what the point of the single 1Gbit ethernet port is if there's only one of them (i.e. you're limited by whatever you connect through it).

    Second of all, I'd check to see if any programs are running on the router (I had an old Netgear box that was stuck at 100% CPU and routing got really slow).

    Third of all, make sure that your 100Mbit ethernet sockets are actually set to 100Mbit; my router came with them all set to 10Mbit by default (including the Gbit ones, which was weird).

    Fourth, assuming it supports backing up its configuration (so you can revert to your correct configuration afterwards), I'd try resetting to defaults. The option looks like this ("keep user configuration"), although I have no idea if you have said option:-

    [screenshot: the reset-to-defaults dialog with its "keep user configuration" option]

    Other than that, I'm out; I'm not very good when it comes to testing hardware. If your router supports a bandwidth test, I'd also run it from the router's point of view (to see whether the issue is Windows --> router or router --> Ubuntu), although not many routers support this. In my case it's under "Tools" and "Bandwidth test server":-

    [screenshots: the router's "Tools" menu and its "Bandwidth test server" page]

    Although, you might be able to pull it off with something like TCPSpray.

    Other than that, I'm flat out of ideas; I'm terrible when it comes to diagnosis.

  2. First of all, can we have a debug log? P.S. I'm not sure whether the log contains personal information/secrets, so I'd skim through it first.

    Second, is it just one machine that can't see the other, or can neither see the other? I had an issue where one device could see the other but not vice versa; I fixed it by setting up port forwarding for the second device (even though it was supposedly already port forwarded).

  3. Well, when I was installing Plex server I was looking for a way to watch movies directly on that machine, like XBMC media center. But I couldn't find a Plex media center for Linux, just the server. Maybe I missed it, but can you even watch movies directly, without opening the browser, on the server machine (which is plugged into an LCD TV through HDMI) if you have Plex server installed?

    Also, the second question is language support. I'm not the only one who uses that machine for watching TV, so the interface has to be in my mother tongue, because the others aren't as fluent in English as I am. Going to check out the rest.

    Also, I love how Plex auto-sorts files into libraries. I wish it were that easy on XBMC.

    Plex media client exists on Windows and Mac, and lets you run Plex without anything like a browser.

    Plex home theater exists on Windows, Mac and Linux, and is a beta version of an upgraded Plex media client.

    I currently have two HTPCs that use Plex home theater on Ubuntu; they work great.

    Network speed Ubuntu side:- Oh, I did it wrong, so here it is again:


    ------------------------------------------------------------
    Server listening on TCP port 5001
    TCP window size: 85.3 KByte (default)
    ------------------------------------------------------------
    [ 4] local xx.103.18.xx port 5001 connected with 89.212.210.128 port 57973
    [ ID] Interval Transfer Bandwidth
    [ 4] 0.0-19.7 sec 640 KBytes 266 Kbits/sec

    *I edited out my server IP

    Network speed Windows side:


    ------------------------------------------------------------
    Client connecting to xx.103.18.xx, TCP port 5001
    TCP window size: 64.0 KByte (default)
    ------------------------------------------------------------
    [ 3] local 192.168.64.102 port 57973 connected with xx.103.18.xx port 5001
    [ ID] Interval Transfer Bandwidth
    [ 3] 0.0-17.6 sec 640 KBytes 298 Kbits/sec

    *I edited out my server IP

    Seems like there IS some speed issue.

    Also, my server has two network cards: one ethernet, the second wifi. The ethernet one is directly connected to the ISP's Innbox; there's no (wifi) router in between (because the router doesn't support multicast, which is needed for IPTV). My computer is also connected directly to that Innbox by ethernet, and it also uses the wifi network (it's a laptop) when I need it.

    Another thing I don't understand: my PC detects itself as 192.168.64.102, which is odd. Why would the Innbox give my computer a local IP and not the rest of the machines, like the IPTV box or even the server (which has only one local IP, from the wifi: 192.168.1.103)?

    EDIT:

    if I connect my PC to the server through a router (PC_WIFI -> router -> server_WIFI), the iperf result is:

    [ 4] local 192.168.1.103 port 5001 connected with 192.168.1.101 port 62151
    [ ID] Interval Transfer Bandwidth
    [ 4] 0.0-10.3 sec 8.38 MBytes 6.81 Mbits/sec

    For starters, yes, there obviously is a network issue. Now, I'm confused about what you mean by "INNBOX". Can you give me the model number (or a link to the manufacturer's site) of this "INNBOX"? I admit I'm not very advanced in networking, but I've never heard of a piece of hardware called an "INNBOX".

    Another thing is, with wifi you're limited by what mode you're in. Once again, I'm not very good with wifi, but I believe the caps are something like (depending on the mode) 5Mbps/11Mbps/54Mbps.

  4. How would this be different from regular OS commands?

    I just assumed that the API would be for remote modification of shares without having the full files downloaded (e.g. for my service to modify your shares without having your shares, so I can't run my MD5 hashing software on a file I don't have). If I'm incorrect, it'd help to know how the API would be used (which I asked before, but apparently you don't know yourself).

    EDIT:- Unless you're talking about launch arguments, which I wouldn't really call an API. I'd simply call that a command-line interface.

    EDIT2:- Example:-

    The YouTube API provides the length of a video. If I were to download said video I'd also know its length, but at that point I'd stop calling it an API and start simply calling it a downloader & metadata reader.

    EDIT3:- Or maybe you mean plugins for BTSync? I can understand why I wouldn't need the commands if it was simply a plugin, something that runs alongside BTSync.

  5. A few more:-

    A. Get file modification, access and creation dates (assuming the filesystem supports them)

    B. Get size (real size, not size on disk, as that'd change between nodes)

    C. Get hash of file (MD5/SHA1/SHA2/CRC/etc...)

    D. Get hash of specific 4MB parts of a file (the block size BitTorrent Sync uses)

    E. Access files inside rar/7zip/tar/gzip/zip archives and treat them as if they were any other file.
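
    For concreteness, a purely hypothetical command-line sketch of those calls (no such tool or subcommands exist in BTSync; every name and flag here is made up for illustration):

    btsync-api stat --share "$SECRET" --path docs/report.pdf          # A: modification/access/creation dates
    btsync-api size --share "$SECRET" --path docs/report.pdf          # B: real size in bytes
    btsync-api hash --share "$SECRET" --path docs/report.pdf --algo sha1         # C
    btsync-api hash-block --share "$SECRET" --path docs/report.pdf --block 3     # D: hash of one 4MB block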

    However, I'm starting to wonder what real use an API would actually provide, and I can't think of much.

  6. The current version hashes 4 MB blocks of the file. If the file length doesn't change, only the changed 4 MB blocks are updated. If the file length changes, the entire file is resync'd.

    Search these forums. There are plenty more details about that available.

    In fact, I have a question about that. What happens if I add, say, 8MB to the end of the file, so no bytes are shifted? Would it still resync the whole file, or just the last 8MB that changed?
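
    For anyone who wants to see the per-block behaviour locally, a minimal sketch assuming GNU coreutils (sha1sum is just a stand-in for whatever hash BTSync actually uses internally):

    split -b 4M --filter='sha1sum' bigfile.bin   # prints one hash per 4MB block; only blocks whose hash changed need resyncing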

  7. If you have a time-to-market date for the API, I would like to know it ASAP. My humble suggestion is that it should be available in a cross-platform way, preferably in C++, if possible with Java and .NET extensions. But I think the best approach would be to also have a JavaScript-oriented API. Then, in combination with Socket.IO, Node.JS and dynamic DNS, we could build a fully operational cloud shared-storage service using our own computers and a browser! Really cool!

    Or, you know, just release the API protocol specification and then every programming language is supported? Excluding JavaScript, because of the same-origin policy (https://en.wikipedia.org/wiki/Same_origin_policy); however, you could use PHP & JavaScript together to overcome that limit.

  8. Another few things:-

    A. Ability to do basic tests on files without having to download them all (have the nodes in the system that do have the file run the commands). Stuff like egrep/head/tail/sed/tr/wc: all the basic Linux I/O commands.

    B. Ability to edit small parts of a file without ever having the whole file. Say I know I want to overwrite bytes 10,000 through 12,500 with "DO A BARREL ROLL!" 147.058823529 times; then I don't really want to download the full file (which may be insanely large). You'd know where to edit using the above Linux I/O commands (a local sketch of this kind of edit follows this list).

    C. Ability to select "only upload X copies" and let the other nodes deal with sharing it within the network. Useful for third-party companies who want to preserve bandwidth: once they upload one copy, they no longer want to seed it for their customers; the customers' PCs can then seed among themselves.
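
    As promised in B, here's roughly what that byte-range edit looks like when done locally with standard tools (bigfile.bin is a stand-in name); the proposed API would just do the same thing remotely:

    # overwrite 2500 bytes starting at offset 10000, without truncating the file
    yes 'DO A BARREL ROLL!' | tr -d '\n' | head -c 2500 | dd of=bigfile.bin bs=1 seek=10000 conv=notrunc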

  9. 1. Dedicated API keys that you can generate an unlimited number of & give specific permissions to. For example, if site A wants to use my share but I'm not sure I trust them, and they say they just want to dump log files from my VPS onto it, I can select "only allow writing of files, not reading".

    2. Be able to revoke API keys. Say I cancel my VPS with the above site; I no longer want them to be able to write to my share, so I want to remove said API key from use.

    3. Obviously, the ability to read files, write files, make folders, delete folders, delete files, move files, rename files: all of the standard filesystem commands. Not sure if it's possible, but maybe also symbolic links and the rest of the more 'advanced' filesystem options? Not sure how well that'll work, though.

    4. Limit API keys to IP (ranges); if the above VPS site gets hacked, I want only their servers to be able to do anything.

    Although, I have a question: how will the API work? Will it be like a web API, where you connect to one of your own servers and that deals with asking the nodes? Or will you just contact any node in the system and ask it to talk to the other nodes? Or will you pretend to be a node? I've never dealt with (or heard of) any P2P APIs, so I'm having trouble understanding how it'll work. If you explain this, I'll probably be able to think up a few more; I'm just confused about which entry point it'll come in from.

  10. Hello,

    Do not, do not, DO NOT use any sort of external service to create 'random' keys for you.

    a) It's simply stupid and quite against the concept of private/secret keys to let others have anything to do with them. I especially like (in a sarcastic, face-palming sort of way) sites like https://www.grc.com/passwords.htm that jabber on about how secure and safe they are, using SSL and whatnot, and yet constitute a perfect example of an implemented attack.

    b) It's completely, utterly and in every other way unnecessary. Every modern operating system has had a facility for random number generation from hardware sources for years, incorporating sources such as keyboard/mouse events, hard disk/network interrupts and others. Some modern CPUs (and even some less modern chipsets) also have dedicated hardware random number generators, which will generally be picked up automatically and incorporated into the OS random pool. BitTorrent Sync already uses the right buzzwords, telling us they retrieve their entropy from /dev/random under Linux/MacOS and CryptoAPI under Windows.

    --

    Henryk Plötz

    Greetings from Berlin

    Personally, I use LastPass and let it generate passwords for me.

    Although I do concur this means that someone could brute force my one master password and gain everything, and that if LastPass were to go rogue I'd be screwed. But I'm too stupid to remember 80 different passwords, so if I didn't do this, brute forcing any one of my passwords anywhere would give you all of my passwords. I feel this is the better trade-off.

    FYI:- I used /dev/urandom and tr to generate my secret key, although I have read somewhere that urandom isn't meant to be secure or whatnot.
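
    For reference, the sort of one-liner I mean; a minimal sketch (the base32-style alphabet and 33-character length merely mimic the look of a BTSync secret, which is an assumption on my part):

    tr -dc 'A-Z2-7' < /dev/urandom | head -c 33; echo   # keep only base32 characters, emit 33 of them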

  11. encfs allows you to easily transfer the already-encrypted files. When you look at the unmounted encfs store, it's just a bunch of oddly named files. (Both the filenames and content are encrypted.)

    So you set your encfs store to be the shared folder with btsync. You work on your files in the mounted (decrypted) encfs mountpoint, and every time you update a file, its encrypted source will be sync'd.
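
    A minimal sketch of that layout, with made-up paths (~/btsync/encrypted being the folder btsync shares):

    encfs ~/btsync/encrypted ~/plain    # mount; work on your files under ~/plain
    # every edit under ~/plain updates the encrypted blobs in ~/btsync/encrypted, which btsync syncs
    fusermount -u ~/plain               # unmount when done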

    With TrueCrypt, I think it would still work just fine, but using the container locally on your computer. You can make the TrueCrypt container a fixed size (sparse files for the win! The "dynamic" option in TrueCrypt for Windows), so only the 4 MB chunks that are changed within the TrueCrypt container are resync'd with btsync. Just don't defragment inside your TrueCrypt mountpoint, haha! :rolleyes:

    That all seems like way too much work: mounting and updating and unmounting* and remounting. I'm just going to keep my files on an internal "my servers only" share.

    *I believe btsync only syncs files that aren't in use (either btsync, Plex, or both behave that way; I can't remember).

  12. (I do have Plex. I have both XBMC and Plex on that server: Plex for streaming shows to my tablet and phone, and XBMC because it's easier to manage when watching things on the LCD TV to which my server is connected via HDMI.) Plus, sickbeard does seem interesting; my only fear is that it'll mess up my automatic subtitle finder.

    I need bittorrent sync for other things that I use my server for.

    So here's the info:

    iperf -s (Ubuntu's side)


    ------------------------------------------------------------
    Server listening on TCP port 5001
    TCP window size: 85.3 KByte (default)
    ------------------------------------------------------------

    Network speed (Windows side)



    connect failed: Connection refused

    Write speed (Ubuntu side)

    475914240 bytes (476 MB) copied, 18.0423 s, 26.4 MB/s

    Write speed windows (didn't check)

    (my internet speed is 50/50 mbit/s)

    First of all, may I ask what you like about XBMC that Plex doesn't have?

    Second of all, XBMC (I believe) has a Plex addon that allows you to use your Plex server as a library.

    Third of all, I have sickbeard with Periscope (a subtitle downloader) working fine. I had to modify a few of sickbeard's configuration files to get it to run as an extra script, but it all works.

    Anyway, as for the stats:-

    Network speed Ubuntu side:- Invalid data; you need the Windows PC to actually connect for the test to run.

    Network speed Windows side:- Are you sure you're connecting to the right IP? Can you ping said IP? Is iperf running on Ubuntu when you execute it on Windows?
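
    For reference, the test needs both halves running at the same time; something like this, with the IP replaced by the Ubuntu box's actual LAN address:

    iperf -s                          # on the Ubuntu box: listen on TCP port 5001
    iperf -c 192.168.64.101 -t 20     # on the Windows box: connect and send for 20 seconds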

    Write speed (Ubuntu):- Not great, but more than enough for bitsync. This more than likely isn't the issue.

    Read speed (Windows):- N/A

    Internet speed is 50/50:- I assume this is your down/up link to your ISP? If so, it's irrelevant here: all of this traffic stays inside your closed LAN (I assume this is a LAN? Otherwise you'd need to forward port 5001 on the NAT device the Ubuntu machine sits behind for the iperf test), so it never leaves your LAN (well, some does, but that won't throttle your download speed).

  13. Linux version:

    Add the ability to select which IP addresses the admin web page listens on.

    It's not safe to publish this page on all IP addresses.

    This can be done via iptables, any half-decent router (even for internal requests), or sync.conf (although that's a lot less advanced).
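
    A minimal iptables sketch, assuming the web UI is on its default port (8888) and should only be reachable from the machine itself:

    iptables -A INPUT -p tcp --dport 8888 ! -s 127.0.0.1 -j DROP   # drop web UI traffic from anywhere but localhost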

  14. @eseelke, Wow, those are some pretty good deals on storage VPSes.

    As for encryption, you could always implement it client-side with encFS or TrueCrypt for now, or set up an encrypted partition on the VPS (though manually entering the credentials after boot might be a pain).

    Not really, since the file has to be open for you to write to it, and thus any rogue VPS host could easily access your files.

    The only way it'd work is if you put it in a TrueCrypt volume before sending it.

  15. I am currently removing all my personal information from my work machine and only using apps on a PortableApps-enabled flash drive; the one exception is that I'm still using Dropbox. I love the idea of the BitTorrent Sync solution, but I would like to see it able to sync my flash drive with my other computers. That would solve two problems: if I never walk back into work again, there is nothing to remove from the work computer, and if I lose my flash drive, I have lost no data.

    What I see so far is great (I haven't used it yet); just wanted to add another idea to the wish list.

    You can already select a folder on an external device, so there's no issue there. The only real issue is that you can't set two folders on one computer (e.g. one internal and one external) to sync the same data (without the use of something like Sandboxie).

    Also, I wouldn't recommend dumping files on a flash drive; they have a limited number of write cycles and (in my experience) wear out fast.

    EDIT:- Just thought I'd state, I've only used the Linux version. I've yet to touch the Windows version because:-

    A. I only run a single 128GB SSD in my Windows box; I don't really have much to sync, if you get my drift.

    B. My Windows box is connected to my Linux box, which has 15TB worth of space (and is running BTSync).

    In the Windows version (due to how it mounts drives), it very well might not work. I have no idea.

  16. We plan to do this, but we're not there yet. Consider this case: you share a file with a friend; we won't be able to give the file the owner's user name, even if both platforms are Linux.

    I do see your issue. Possibly store the data elsewhere, in a 'metadata' part of btsync, rather than as part of the file itself?

    That's my best bet; I'm not too good with ideas... or Linux.

    To get over that limitation, a simple way would be a feature that lets you set the desired user/group for a share in the configuration file?

    This would also enable syncing different folders for different users while keeping some sort of ACL...

    This would also be an acceptable workaround. I'm currently having to run BTSync as an obscure user with a really weird configuration on my laptop (to allow it access to my local files, but also to allow its files to be read by anyone yet only executed and written to by the right users, etc.) so that all users can access the files.

  17. I still wouldn't sign up with any third party (VPS / dedicated servers / 'BTSync servers') until encrypted nodes are implemented, if they ever are.

    Just my point of view. Anyway:-

    This VPS product is only allowed to run programs intended to store or assist in the backup of Subscriber's data. Anyone found running programs not intended to store or assist in backup will be suspended and asked to cease, if they fail to, termination will follow.

    Seems insanely strict, assuming that you're a paying customer.

  18. If you encrypt the transferred data, and if I understood correctly BitTorrent Sync encrypts the data before it is transferred, it still might be difficult to get any useful data out of the stream.

    Although, apparently the encryption key is derived from the shared secret, so if you can access the configuration files and read them (or inject into the process and read its memory, possibly, depending on how it stores the data), then you can instantly read the transmitted data again.

  19. Basically, I have a file that I want to back up daily. It's 8GB on the dot and will never change (and if it does, I'm happy to wait a bit of extra time, because for said file to change requires a hell of a reason and won't happen often). I want to keep it for at least 7 days before starting to overwrite it, so I have a command that backs it up like this:-

    When backup is made:-

    myTotallyAwesomeBackup1

    After day one it's renamed to:-

    myTotallyAwesomeBackup2

    After day two it's renamed to:-

    myTotallyAwesomeBackup3

    [...]

    After day six it's renamed to:-

    myTotallyAwesomeBackup7

    After day seven it's deleted.
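
    For reference, the whole rotation boils down to something like this, run once a day (the source path is a placeholder):

    rm -f myTotallyAwesomeBackup7                       # the day-seven copy is deleted
    for i in 6 5 4 3 2 1; do                            # shift every older copy up by one
        [ -f "myTotallyAwesomeBackup$i" ] && mv "myTotallyAwesomeBackup$i" "myTotallyAwesomeBackup$((i+1))"
    done
    cp /path/to/the-8GB-file myTotallyAwesomeBackup1    # today's backup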

    However, I have a few questions about this and ease of transport. The internet connection of the backup machine isn't the best (under 1MB/s), so I'd prefer not to be pulling 8GB files down per day. My first idea was to add a compression step to the script; however, once I added it, I noticed how much more of the compressed file differs day-to-day compared with the original file. I'm not sure if it's a simple data "shift" or if the compressed output really is entirely different (I've never looked into how compression works, to be honest). Shown below are two files from two different days, with their sizes and the number of changed bytes between them, before and after compression:-

    Uncompressed:-

    7.5G CompressTestOne
    7.5G CompressTestTwo
    40199404 bytes (38MB)

    Compressed (only compared up to where test two hit EOF; the excess data was excluded):-

    779M CompressTestOne.gzip
    748M CompressTestTwo.gzip
    781130671 bytes (744 MB)

    Basically, my question is: is there any way to compress a file without the whole compressed output changing when the file changes slightly each day?

    Although, once BitTorrent Sync supports compression during transmission, this is all pointless.
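
    One thing worth trying, assuming your gzip build has it (Debian/Ubuntu's has shipped it for years): the --rsyncable flag periodically resets the compressor, so a local change in the input only changes a local region of the output, at a small cost in compression ratio:

    gzip --rsyncable -c the-8GB-file > myTotallyAwesomeBackup1.gz   # delta-friendly compression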

  20. A workaround (although not perfect) is to add this to a crontab, run once a day:-

    find "$share/.SyncTrash/" -mtime +$days -delete
    find "$share/.SyncTrash/" -empty -delete

    Although, I do agree it's not the best way of doing it. Personally, I rarely delete stuff, so I've yet to set it up for BTSync; however, I do use the above script to delete files I download off RSS feeds from usenet indexers, and it works fine.
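
    For completeness, the crontab entry might look like this (schedule, path and retention are examples):

    0 3 * * * share=/home/me/Sync days=7; find "$share/.SyncTrash/" -mtime +$days -delete; find "$share/.SyncTrash/" -empty -delete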