Timbo


  1. With Windows, when I add a file to a sync dir, the sync begins within seconds. I noticed that with CentOS 8, files do not sync immediately and require the default 10-minute folder rescan before they transfer. I don't recall having this problem with CentOS 7, so I suspect it's a kernel change. The number of times I've spent several minutes looking for a file I fully expected to have synced and downloaded already has to be over 20. It's really, really annoying. Has anyone seen the same behaviour on other Linux distros?
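A quick way to check whether the kernel is even delivering filesystem events (a rough sketch; assumes the inotify-tools package, and /data/sync is a placeholder path):

```
# Install the inotify CLI tools (EPEL on CentOS 8)
sudo dnf install -y epel-release inotify-tools

# Watch the share recursively; CREATE/MODIFY events should print
# the instant you drop a file in. If they fire but Sync still waits
# for the rescan, the problem is on Sync's side, not the kernel's.
inotifywait -m -r -e create,modify,moved_to /data/sync
```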
  2. I'm not seeing any inotify behaviour for file syncs on CentOS 8 running 2.7.2. This must be a new bug, as I don't recall having to wait 10 minutes this often to get files synced. I'm not sure whether this started with CentOS 8 or with 2.7.2, but I'd guess 2.7.2, since I was already on CentOS 8 before that release. Anecdotally, I'd expect transfers to happen within a minute of the file being saved. On Windows, transfers don't appear to need the 10-minute folder rescan to start, but on CentOS 8 they seem to.
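One thing worth ruling out: inotify watches are a finite per-user kernel resource, and a large share can exhaust them silently. A sketch (the 524288 value is just an example; pick what fits your tree):

```
# Current kernel limit on inotify watches
sysctl fs.inotify.max_user_watches

# Raise it (example value) and persist the change across reboots
echo 'fs.inotify.max_user_watches=524288' | sudo tee /etc/sysctl.d/90-inotify.conf
sudo sysctl --system
```

As a stopgap, lowering the folder_rescan_interval power-user setting below its 600-second default at least shortens the wait.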
  3. You can try restarting Resilio periodically. I just ran into a bug where a transfer starts out at 45MB/s and then, after about a minute, drops down to 5-10MB/s. Restart the receiving Resilio service and you get the 45MB/s again before it drops back down to ~9MB/s... and the pattern repeats. This sawtooth-throughput bug has been reported many, many times before. I might have to start regression testing to find the last version of Resilio that didn't have it. The difference between 45MB/s and 5-10MB/s was something like 23 minutes vs. 2-8 hours, and these are mostly large files; I can only see it being much worse with small files.
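If you want to automate the restart workaround until it's fixed, a crude sketch (assumes the stock resilio-sync systemd unit shipped with the Linux packages):

```
# /etc/cron.d/resilio-restart
# Restart the receiving side hourly to reset the sawtooth decay.
# Blunt, but effective until the underlying bug is fixed.
0 * * * * root /usr/bin/systemctl restart resilio-sync
```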
  4. Resilio should already be sending multiple files at once. However, I don't think 2.7.2's behaviour matches what it used to be, or the intended BitTorrent behaviour. I recall them publishing a white paper showing how pushing a file out to something like 250 business users happened much faster. I have not observed what they claim at all: the clients do not intelligently split files up and share different parts with different peers in any way that produces an observable speed increase. My home connection is 600Mbps down, 16Mbps up. When I synced with only one server, the file would only be sent to me at around 40Mbps, so I wanted more senders to max out my 600Mbps download. I set up multiple cloud servers, all with gigabit (or 10-gigabit) up/down connections, expecting that five fast servers in close geographical proximity would sync the file quickly amongst themselves and then maybe saturate my 600Mbps connection with five senders instead of one. Instead, I found my home computer constantly maxing out my 16Mbps uplink to share the files with the cloud servers, which were closer to each other and all had far more bandwidth. It ended up being slower, creating bottlenecks, and running into bug after bug. I think it's one of those things where Resilio employees don't use their own product, or else they'd know that scenario hits several bugs that ruin any BitTorrent efficiency. To be fair, this would take quite a bit of QA resources to set up and automate, but they really should have an automated QA test bed, and I highly doubt they do.
If you're only syncing between two peers, just use rsync for 10x the speed (especially the latest rsync, which made HUGE improvements), at least for the initial transfer of a large number of files. Then let Resilio handle the automatic, smaller transfers once everything is in sync. Large numbers of small files stress the CPU and the machine's I/O. Your point about RTT is kind of beside the point (what actually matters is the bandwidth-delay product, not the RTT by itself), and all packets are going to be sent at the maximum packet size anyway (i.e., a 1500-byte MTU). Syncing a large number of small files over wireless is a poor setup to begin with; the added latency, shared medium, packet loss, and retries of wifi will only exacerbate the problem. But you are right that files need to be sent in parallel to maximize the TCP connection. For example, run iperf3 with one TCP session, then with 4+ parallel sessions: the latter yields higher throughput, almost maxing out the connection where a single TCP session does not (see the sketch below).
The Resilio team seems young and could likely do more multithreading to improve performance. I've got tons of horsepower between Threadrippers, NVMe storage, and 10GbE networks, and Resilio is clearly built more for convenience than performance. There are some power-user settings that may help (see the *_workers_*-related settings), but I have yet to notice any improvement the few times I tried. Ideally those settings would adjust dynamically to CPU cores, available RAM, etc. (seeing how this runs on everything from lowly single-core 512MB-RAM NAS boxes to 32-core 64+GB-RAM systems), but again, that takes a lot of QA and testing.
Maybe if Resilio gets some investment and development continues, there might be some improvement in the encryption, taking inspiration from WireGuard or something. I don't know; I'm just guessing that encryption is one of the bottlenecks currently limiting performance. tl;dr: use rsync to do the initial transfer of large numbers of small files.
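To see the parallel-TCP effect and the rsync bootstrap for yourself (hostnames and paths below are placeholders):

```
# One TCP stream vs. four parallel streams: the latter usually
# gets much closer to line rate on a fat or high-latency pipe
iperf3 -c server.example.com
iperf3 -c server.example.com -P 4

# Bootstrap the bulk of the data with rsync, then let Resilio
# take over the ongoing incremental syncing
rsync -aH --info=progress2 /data/share/ user@server.example.com:/data/share/
```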
  5. Not really sure what your problem is; you just do the same on C as you did on B... It's fairly basic and straightforward. You start by adding a folder on A. Then you copy the share key (read-only or read/write) and use that key on B and C (and D, E, F, etc.). Some of my syncs are between 2 peers only and some between 5.
  6. I imagine they are struggling to find a business model that brings in sufficient revenue. I doubt there are currently enough paying Resilio customers to support the dev and support teams.
  7. What exactly was wrong with the two options discussed in that thread? You didn't detail what problems you had with the manual configuration. 1. Use a proxy. 2. Configure your peers to use port 80 or 443 as their Resilio listening port. As for why Resilio Support hasn't responded: they can't advise you on how to violate the license terms you're clearly violating. Resilio Sync isn't for use in a business environment (even for personal use! Same as TeamViewer's policy). They also don't want to be on IT admins' sh!tlists for helping employees violate company policy when Resilio's business product is their bread and butter. The license has been this way for all of Resilio 2.x, unless you're running the no-longer-supported Sync 1.x. And if you had a business license, I'd have to ask why you wouldn't just contact support directly. I mean, c'mon man, have some common sense.
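On option 2: binding a port below 1024 on Linux normally requires root, so a peer listening on 80/443 needs a capability grant. A sketch, assuming the stock /usr/bin/rslsync binary location:

```
# Allow the Sync binary to bind low ports without running as root
sudo setcap 'cap_net_bind_service=+ep' /usr/bin/rslsync

# Confirm which port it is actually listening on
ss -tlnp | grep rslsync
```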
  8. So when you give out incorrect info and get called out on it, this is how you respond? Without the knowledge or experience, who are YOU to be telling others what they want? They have the laptop, they have battery and processing issues, and they are looking for an ETA for that build. You're offering NOTHING OF VALUE by telling them there is no benefit to a native build. You don't know that. "compiled to target a different instruction set to run with no penalty on the host OS. From the M1 reviews I've read, Rosetta 2 handles this perfectly." You were 0 for 2 there. There is ALWAYS a penalty in emulation (just in different amounts), and no, Rosetta 2 does not handle this perfectly. Major software needed major updates for it to even work, and Adobe's programs still won't run. I don't know what reviews you've read, because all of them come with caveats. You do know that Apple carefully picks and chooses its advertised benchmarks, right? Every new release is a smoke show hiding problems until they get worked out. Look it up: they gaslight you into thinking there is no problem, despite overwhelming evidence to the contrary (see the various class action lawsuits). I'll sum this up by saying you don't know what you're talking about, you're not qualified to tell other people what's what, and what you say will not influence Resilio's decision to build native or not. p.s. Newsflash: this is a discussion forum and this is ON TOPIC. If you thought otherwise, you made the mistake and should show yourself out. You don't handle being told you're wrong very well; you could have used it as a learning moment. That's unfortunate.
  9. Oof. Did you not read the very first paragraph you linked to? The second paragraph then gives an exact example of emulation: allowing a binary written for the Game Boy to be played on other processors. "In a more technical sense, native code is code written specifically for a certain processor.[1]" That would be x86_64 native; Arm (the M1 CPU) is not. That first sentence is an oversimplification and I'd argue it doesn't belong there (nobody talks about "native" code when building debs vs. rpms, because those are built per architecture, i.e., per CPU). Also, I'm calling bullshit on you saturating your gigabit network. Do a simple file copy over the network and monitor it to confirm you're consistently above 950Mbps for the transfer. Now do the same with Resilio and tell me whether it's closer to 300Mbps or 600Mbps. So now your argument is that YOU run very low-CPU devices but don't sync enough to see peak performance issues? No difference between a phone and a desktop? C'mon now, don't be silly. If you used the Android app with any decent number of files, you'd know how wrong you are. Resilio is nice for hands-off syncing, but there are lots of performance improvements to be made. They seem to have a lot of platform support but no QA, which means testing and optimization don't happen (have you tried running on a 512MB-RAM NAS?). Desktop developers are not the same as embedded developers, who work within smaller resource budgets and write more efficient code. There are likely various buffer settings and such to tune. Anyway, in the future I'll likely use rsync for the initial sync, as Resilio just wasn't transferring anywhere near what it should have been. rsync recently added improved compression and a noticeable speed improvement (example below).
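On the rsync improvements: 3.2+ lets you pick the compressor, and zstd is a big step up from the old zlib. A sketch (host and paths are placeholders):

```
# Requires rsync >= 3.2 on both ends for zstd compression
rsync -a --compress --compress-choice=zstd --info=progress2 \
    /data/share/ user@server.example.com:/data/share/
```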
  10. Check the date on the conflict file and make sure it wasn't created before you edited the Ignore file.
  11. What "setting" are you referring to? (There is no "setting" to change, only the current time.) The question is: is your Mac showing the right time, to within 1 minute? Compare it to https://www.timeanddate.com/worldclock/ I know for sure that if the time is off by 11 minutes, I'll get that error. If it's off by 1-2 minutes, I'm not sure; under 1 minute, you won't get the error. Most of the time your computer syncs with a time server on the internet and stays within a second or two, but if you just booted a machine that hasn't been on for weeks or months, the clock will be wrong until it syncs. tl;dr: 9 times out of 10, your computer's clock is not accurate enough and needs to sync with an NTP server.
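To check the offset and force a sync (first command is macOS; the others are for a typical systemd Linux box):

```
# macOS: step the clock from Apple's time servers right now
sudo sntp -sS time.apple.com

# Linux (systemd): show clock status and (re)enable NTP syncing
timedatectl status
sudo timedatectl set-ntp true
```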
  12. Yes, you're changing a "DEFAULT" setting; that part is expected behaviour. What is NOT expected is that when using "folder_defaults.known_hosts", you then lose the ability to add or remove additional predefined IPs per folder. That might be worse than the benefit of the feature itself. Right now I got around it by having one side not use folder_defaults.known_hosts, but it'll be a friggin' hassle for two servers that were both set up with this setting and later need more hosts: it would require removing the folder, editing the config file, restarting Resilio, and then adding the folder back. Ain't nobody got time for that. I'd have to assume this is a bug, or else it was a bad idea to limit the feature this way. The key word is "default", not "limit" or "locked".
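For reference, the per-folder form that the global default locks you out of looks roughly like this in a headless sync.conf (the key, path, and IPs below are made up):

```
"shared_folders": [
  {
    "secret": "YOUR_RO_OR_RW_KEY",
    "dir": "/srv/sync/projects",
    "known_hosts": [ "203.0.113.10:3839", "203.0.113.11:3839" ]
  }
]
```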
  13. "The iOS build supports the M1 natively." That's not what "natively" means. You're using it wrong. Don't. At best, I think you meant "compatible"; as you state later, it's using emulation. "literally the toughest thing Sync does from a CPU perspective is hash files, which even the slowest current gen CPUs handle easily." WHAT? Each CPU generation is faster at crypto (things that were done in software in previous generations literally get moved into hardware that runs much faster), and that's a big improvement every generation. Your statement either needs a bunch of qualifiers or it's simply silly. Improving what would be the biggest processing obstacle for the CPU is a gain. They're also encrypting files (lan_encrypt_data=true is the default setting). But I'm arguing about CPU hardware: I have no idea what the M1 implements in hardware over previous Intel chips, but Apple has certainly harped on about its on-chip security engines and so on. "The limiting factor in Resilio Sync workload performance will almost always be network speed, not CPU execution." That has NEVER been my experience, and I'm curious why you'd even think it unless you only operate over very slow links. I've NEVER been able to max out my network connections (I have 10GbE at home with NVMe drives, big RAIDs, AMD 2920X Threadrippers, 32GB RAM disks, etc.), and there are clear resource optimizations they need to make to improve high-speed syncing (interrupt handling, not running their terrible debug logging, etc.). I'm gobsmacked by your statement, as many, many people on these forums have asked over the years why their sync speed is nowhere near their network limits. CPU and especially disk I/O bottleneck WAY, WAY, WAY before network speed does.
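This is easy to verify on your own hardware; run these on an older and a newer CPU and compare (throughput differs wildly with and without hardware crypto support):

```
# Hashing throughput -- what Sync does to every block of every file
openssl speed sha256

# AES via the EVP interface, which uses AES-NI where available
openssl speed -evp aes-256-gcm
```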
  14. Can you run `top` on the seedbox and determine whether CPU usage is high? Do you know how fast the disk I/O is? Lastly, does your seedbox have a public IP? If so, you can skip the relay and tracker and use predefined hosts.
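Roughly what I'd run to answer the first two questions (iostat comes from the sysstat package; the dd target path is just a scratch placeholder):

```
# CPU: is rslsync pegging a core?
top -o %CPU

# Disk: per-device utilization and latency, refreshed every 5 s
iostat -x 5

# Crude sequential-write test that bypasses the page cache
dd if=/dev/zero of=/tmp/ddtest bs=1M count=1024 oflag=direct && rm /tmp/ddtest
```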
  15. Two-way syncing .git folders sounds like a bad idea and wouldn't be a typical use case for them to test, which answers the OP's original question. Also, you replied to a 2-year-old thread and asked a redundant question (he solved it by reinstalling 2.5.13). Start your own help thread.