stanha

Members · Posts: 141 · Days Won: 3

Everything posted by stanha

  1. Well, it seems the problem of some files not syncing is resolved in our case: those files that did not sync before on r/o nodes on Linux now do sync, and the master (r/w) node shows that the r/o node is fully synced. See the following post: http://forum.bittorrent.com/topic/27781-some-files-do-not-sync/?p=79032#post_id_79032 What remains is to see whether ALL the files sync when updated on the r/w node.
  2. Finally we got it working. "Thank God for that".
  3. What is critically needed is a file transfer results tab in the GUI. It should show exactly which files are being transferred, preferably with a progress bar, the RESULT of each transfer, and, if a transfer was not successful, the exact reason it could not be completed. As it stands right now, we have a number of files that either won't update or are not even being attempted, or whatever else the reason is for files not being updated on the client (r/o) nodes when they are modified on the master (r/w) node. Looking at the logs, especially without knowing the implications of some of the error messages, of which there seem to be too many, makes it practically impossible to find the reason why some files are not being transferred.

Also, ALL failed transfers should probably be logged to a separate log file dedicated to errors, where the most detailed information about each error is presented, and not just "error (R/O)" type messages. Those are pretty much meaningless, especially if you have write permissions on the files and the folders they are in. That is why Unix/Linux has both stdout and stderr. Errors are not standard output in terms of log files. The log files can be huge and contain all sorts of information that is not very useful from the standpoint of errors. That is why you need a debug.err.log, or whatever name you find appropriate, to clearly identify transfer errors with a detailed description of the reason, such as "file xxx.xxx could not be saved - no write permissions", "file update time stamp differs by more than xxx seconds", or "torrent is damaged", and so on. How can you fix something if you do not know the EXACT reason for the misbehavior? Some, if not most, log messages simply do not make any sense.

Either their exact meaning has to be described in a manual with sufficient detail and precision that users can understand what they actually and specifically mean, or you can simply forget about the logging. The way it is right now, it is next to useless for finding out the exact reason for a problem or transfer failure.
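The stdout/stderr-style split suggested above can be sketched with Python's standard logging module: one handler records everything, while a second handler set to ERROR level captures only failures in a dedicated file. The logger name, file names, and messages here are illustrative, not anything BTSync actually provides.

```python
import logging

# Hypothetical two-log setup: everything goes to debug.log,
# while ERROR-and-above records are duplicated into debug.err.log.
log = logging.getLogger("sync")
log.setLevel(logging.DEBUG)

main = logging.FileHandler("debug.log")
main.setLevel(logging.DEBUG)          # full firehose

errors = logging.FileHandler("debug.err.log")
errors.setLevel(logging.ERROR)        # only failures land here

fmt = logging.Formatter("%(asctime)s %(levelname)s %(message)s")
for handler in (main, errors):
    handler.setFormatter(fmt)
    log.addHandler(handler)

log.info("transferring file xxx.xxx")  # appears in debug.log only
log.error("file xxx.xxx could not be saved - no write permissions")  # both files
```

With this split, an administrator can watch the small error log alone instead of digging through the full debug output.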
  4. Question: what exactly is the string I need to look for in the log file to see which files do not sync, and why? Q2: what do I do to make sure ALL the files sync if some of them do not?
  5. This is actually turning into a disaster level of an issue. Well, "tough".
  6. Some files won't sync. But some say that is the user's problem.
  7. Some files won't sync. Configuration: Win 7 master, Linux r/o client. Some files won't propagate modifications from the master copy (r/w node) to the r/o client. But if a file is renamed to another name, it will propagate properly to the r/o client machine and will update when the master copy is modified. Even if you rename it to a third name, it will be properly renamed on the r/o client side, and updates on the master will still propagate correctly. But if it is renamed back to the original name, the same exact file will no longer propagate to the r/o client when the master is modified.

Basically, that original file name is lost to updates. Even if you delete it from the master or the r/o client and then restore it, whether btsync is running or not, and then recreate it and modify it on the master, it won't propagate to the r/o client no matter what you do. It seems that the file name is latched into some dead state, marked as non-propagatable, and is no longer updatable. What is the reason for this logic?

Is there any way to restore the propagation status of these files, and to make sure all other files won't develop this problem? Maybe there needs to be some reset-type command that scans all the files and clears whatever bit was set to lock them, but only for files already on the r/o machine. Or maybe a force-rescan-update of all files that are out of date by time stamp, file hash, or file size. Otherwise the file is lost as far as propagation of updates goes, and its propagation status can no longer be changed. What other ways are there to make sure all the current files on the r/o machine are up to date and will remain updatable?
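For the last question above, one brute-force way to check whether an r/o copy is actually up to date, independently of btsync's own state, is to hash both trees and compare. A minimal sketch (the function names and the master/replica layout are hypothetical, not part of any BTSync tooling):

```python
import hashlib
import os

def tree_hashes(root):
    """Map relative path -> SHA-1 of contents for every file under root."""
    hashes = {}
    for dirpath, _, files in os.walk(root):
        for name in files:
            full = os.path.join(dirpath, name)
            rel = os.path.relpath(full, root)
            with open(full, "rb") as f:
                hashes[rel] = hashlib.sha1(f.read()).hexdigest()
    return hashes

def diff_trees(master_root, replica_root):
    """Return relative paths that are missing or stale on the replica."""
    master = tree_hashes(master_root)
    replica = tree_hashes(replica_root)
    return sorted(p for p, h in master.items() if replica.get(p) != h)
```

Running `diff_trees()` against the master share and a copy of the r/o share would list exactly the files that are "latched" and not receiving updates, regardless of what the sync logs claim.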
  8. We have a master collection on Win 7 and an r/o copy on Ubuntu 13.10. The collection is about 1 GB in size and includes about 12,000 files. When a rescan starts, it takes more than 10 minutes to complete for whatever reason. That implies that if the rescan interval is set to the default 10 minutes, the disk is simply going crazy non-stop - over 1,000 I/O operations per second. Our VPS vendor is not going to be happy about us eating up the disk I/O resources. Question: why does a rescan have to be performed on the Linux r/o machine every 10 minutes? I'd prefer a Rescan button instead, if it is impossible to detect the changes otherwise. Secondly, why does it have to be redone again and again on an r/o machine? It seems the only changes occur when the master updates a file, in which case it simply offers a new version to the r/o machines. Am I missing something here?
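A quick back-of-the-envelope check of the numbers above (all figures are taken from the post itself, and the ~1,000 IOPS is the poster's observation, not a measured benchmark): 12,000 files rescanned over a 10-minute window means at least ~20 stat() calls per second just to poll, and the observed IOPS would imply roughly 50 I/O operations per file per scan.

```python
# Figures from the post; this is only arithmetic, not a measurement.
files = 12_000
scan_duration_s = 10 * 60   # the scan itself takes about the whole 10-minute interval
observed_iops = 1_000       # the poster's reported disk load

# Lower bound: one stat() per file spread over the scan window.
min_ops_per_second = files / scan_duration_s

# What the observed load implies per file (directory reads, hashing, etc.).
implied_ios_per_file = observed_iops * scan_duration_s / files

print(min_ops_per_second)    # 20.0
print(implied_ios_per_file)  # 50.0
```

The gap between 20 stats/second and ~1,000 IOPS is what makes polling rescans so expensive compared to event-based change notification.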
  9. Some files are not updated on read-only machine...
  10. A question arises: how can a developer develop an app that he does not control?
  11. Well, the way it looks right now, the very idea of a key controllable by a 3rd party could easily be used by the NWO puppeteers, the NSA, HLS, and similar outfits for the evil purposes of controlling and dominating information flows on the net. Furthermore, at some point, if not already, BT may be totally taken over and/or simply bought out, and/or its funding may be withdrawn if they do not follow the instructions of the puppet masters, who are currently trying really hard to block access to "dangerous" information, which they bluntly classify as "terrorism" and so on. If I recall, BT has about 1,000 employees. It would be interesting to learn what their sources of income and/or funding are to pay this many employees, unless they are working for free. In that context, P2P becomes nothing more than a grand illusion, totally controlled by the same powers of evil, and your freedom becomes a myth. How can there be freedom if some invisible hand pulls the strings on YOUR servers and controls YOUR clients? Is this a joke of some sort?

Also, can anyone tell me whether BT Sync communications can be interfered with and/or controlled by a 3rd party even if it does not utilize the API? It seems that the tracker may simply not inform a node of the hash code for some collection. Can the hash key be controlled to block and/or interfere with transfers by an outside party? It seems that unless you have some other trackers besides BT's, you have no idea that some nodes cannot communicate with each other simply because the BT tracker does not propagate the hashes. How much does BT control the transfers? Is this solvable by merely using DHT and your own tracker? It seems that since the hash codes pass through BT, from then on it should not be much of a problem for a 3rd party to simply access the nodes directly. This is a central point of control in this setup, isn't it?