Search the Community

Showing results for tags 'docker'.

Found 13 results

  1. Hello! I'm trying to set up Resilio Sync on my home lab using MicroK8s, with storage pointed at a Samba server I have running (Windows). It starts up, I can create my login and accept the EULA, but if I add any shares I get a database error. If I do NOT use the Samba share and instead just use local storage, it seems to work fine. Any ideas why the Samba share won't work? It seems to be specific to the SQLite database (I'm guessing).

```yaml
kind: Namespace
apiVersion: v1
metadata:
  name: home
---
kind: Deployment
apiVersion: apps/v1
metadata:
  labels:
    app: sync
  name: sync
  namespace: home
spec:
  replicas: 1
  selector:
    matchLabels:
      app: sync
  strategy:
    type: Recreate
  template:
    metadata:
      labels:
        app: sync
    spec:
      containers:
        - image: resilio/sync
          name: sync
          volumeMounts:
            - name: sync-share
              mountPath: /mnt/sync
      volumes:
        - name: sync-share
          flexVolume:
            driver: "fstab/cifs"
            fsType: "cifs"
            secretRef:
              name: "cifs-secret-sync"
            options:
              networkPath: "//win-share/sync"
              mountOptions: "iocharset=utf8,file_mode=0777,dir_mode=0777"
      nodeSelector: linux
---
kind: Service
apiVersion: v1
metadata:
  name: sync-np
  namespace: natimark
spec:
  type: NodePort
  ports:
    - port: 8888
      protocol: TCP
      targetPort: 8888
      nodePort: 32710
  selector:
    app: sync
```
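SQLite databases frequently misbehave on CIFS/SMB mounts because SQLite's locking relies on byte-range lock semantics that many SMB stacks do not implement faithfully, which matches the symptom of local storage working while the Samba share fails. One commonly suggested workaround is to keep Sync's database directory on local disk and expose the CIFS share only as a data folder. A minimal sketch, expressed as a plain `docker run` for brevity (the `/mnt/sync` and `/mnt/mounted_folders` container paths follow the official image's conventions; the host paths are illustrative assumptions):

```shell
# Assumes the CIFS share is already mounted on the host at /mnt/win-share
# (illustrative path). The Sync database stays on local disk:
docker run -d --name sync \
  -p 8888:8888 -p 55555:55555 \
  -v /var/lib/resilio-sync:/mnt/sync \
  -v /mnt/win-share:/mnt/mounted_folders/winshare \
  resilio/sync
```

In the Kubernetes Deployment above, the equivalent change would be mounting a local volume (hostPath or PVC) at `/mnt/sync` and moving the flexVolume CIFS mount under `/mnt/mounted_folders`.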
  2. I've installed Resilio natively from the website using the manual package download (apollolake), but I also tried the official Docker image; both have a 'problem' where, after a short window, the sync speed caps at around 1-2 MB/s. After installing the Docker version I happened to increase the RAM allocation, just trying things that might help, and it did: straight away I was getting 20-30 MB/s, hurray! Well, I was celebrating too soon, as the next file I tested was back to 1-2 MB/s. Looking at the Docker overview you can see the RAM usage slowly rise from 0 to ~10 GB (I'd set a 12 GB max; I have 16 installed). Finishing the transfer cleared maybe 100 MB of RAM and the rest just sits there. All transfers while the allocated RAM is full (or around this point of 10/12 GB allocated) run at 1-2 MB/s; if I restart the Docker container I get full speed back until I max out the RAM again at 10 GB. I assume this was also the issue affecting the native install, I just couldn't see the numbers as easily. If anyone can help resolve this I would be very grateful; I've been looking at this problem for a week thinking it was my connection. I'm currently using a free account so I can't submit a support ticket, but I would happily purchase a license if an answer is out there. EDIT: Decided to buy a licence as it's on offer and comes with money back if I can't resolve this problem; it also appears you can submit tickets with a free account... Thanks. Sync version: 2.6.1 (1319) Synology Spec:
  3. Hi, 2.5.13. It seems that the Docker version is limited to LAN when not used with the --net host argument? Is that correct? I guess that's not a Docker limitation, since I can run a webserver from Docker without using this argument, and it's not a firewall problem nor a router problem... So did you code Sync like that? Why would it not link primarily with one of your relay servers, then look for others on the subnet? So I've consulted your FAQ about your network protocol. If I'm following this, for Docker it's a bit broken, since you force users to use port 55555 for TCP data transmission (I don't even know why you enforce such a thing, since in the desktop app we can change the port, and I don't see why we couldn't change the port here, besides the fact that you didn't resolve the permission to write to a file under Linux between a webserver and the OS's permission system). The container instance will contact the tracker with port 55555 in its contact information, but the tracker won't be able to contact it if we diverted the port to another one in the docker command. So it will never be able to contact the relay, correct? At least that's what seems to happen to me. What I can't understand is this: when I filter the data stream between my different subnets and let through only the ports for the web UI and the data stream, it won't connect between point A and point B unless I configure the specific IP and port for each shared folder. But if I allow free access between point A and point B, then auto discovery works and both are detected without a problem. So which port is used for the transaction? A random port, and that's why it is impossible to filter the traffic efficiently? Would it be a bit easier to allow changing the listening port on Docker or Linux instances?
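If the container advertises port 55555 to the tracker but Docker remaps it to a different host port, inbound connections can indeed fail, which would match the LAN-only behaviour described. Two commonly used workarounds, sketched with illustrative host paths (whether the tracker copes correctly with remapped ports is an assumption, not something this post confirms):

```shell
# Option 1: host networking, so the port Sync advertises is the port
# actually reachable from outside the container
docker run -d --name sync --net=host \
  -v /srv/sync:/mnt/sync \
  resilio/sync

# Option 2: bridge networking with a 1:1 mapping (host port == container
# port) for both TCP and UDP, so the advertised contact info stays valid
docker run -d --name sync \
  -p 8888:8888 -p 55555:55555 -p 55555:55555/udp \
  -v /srv/sync:/mnt/sync \
  resilio/sync
```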
  4. Hello, I'm running Resilio Sync using the official Docker image (resilio/sync). I can access the UI on port 8888 and everything looks OK. Unfortunately, I receive a 404 for the following resources: /gui/css/style.css and /gui/version.json?1521721548733. This makes the UI completely unresponsive. For instance, I can't link any device because the "Link Device" link is broken. I'm a PRO customer.
  5. Hi, I am using the official Docker container and have multiple mounted_folders set up; for the sake of debugging I will limit that to one. The container is launched via docker-compose, the volume is mounted at launch/start, and I can access it with the UI and write data in there (read: sync a folder), but all files have root:root ownership. Any idea how to make the ownership of the files within this folder match the parent mounted folder? --- According to Docker discussions, the volume will have the ownership of the user in the container (which seems to be root here); could this be rslsync instead? According to the docs there is a way to set this up with the USER directive in the Dockerfile. docker-compose.yml:

```yaml
version: "2"
services:
  sync:
    image: resilio/sync:latest
    ports:
      - "8888:8888"
      - "55555"
    volumes:
      - "./syncdata:/mnt/sync"
      - "./config:/mnt/sync/config"
      - "/mnt/myfolder2:/mnt/mounted_folders/myfolder2"
    restart: always
```
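Since the official image runs its process as root by default, files it creates in a bind-mounted volume end up root-owned on the host. A hedged sketch of two workarounds (the UID/GID 1000 is an assumption, substitute the owner of the parent folder; whether the image starts correctly under a non-root `--user` is also an assumption):

```shell
# Workaround 1: periodically realign ownership on the host after syncing
sudo chown -R 1000:1000 /mnt/myfolder2

# Workaround 2: start the container under the desired UID/GID,
# if the image tolerates running as a non-root user
docker run -d --user 1000:1000 \
  -p 8888:8888 \
  -v ./syncdata:/mnt/sync \
  -v /mnt/myfolder2:/mnt/mounted_folders/myfolder2 \
  resilio/sync:latest
```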
  6. Hi, I would like to know whether it is better to use the NAS version directly on a NAS such as Synology or QNAP, or to use the Docker version for those who can run Docker on their NAS? Thanks in advance
  7. Hi Resilio Team, like others in this forum who do embedded Linux builds, I need a musl-compiled sync client as well. Since Docker will switch its default to Alpine Linux (which is musl based) [1], I suggest you seriously consider offering a statically compiled binary. If your client is written in Go, this is very easily achievable [2]. Of course, open-sourcing the client would help as well :D. Thanks for your time. Joe [1] [2]
  8. Hi, are you going to provide a command-line interface for BTSync 2.0? Or an API compatible with 2.0 folders? Or at least a way to deal with 2.0 folders via a config file? I love BitTorrent Sync, but I used to deploy peers using Docker containers, and it was super easy. Now I can't sync the new (2.0) folders without going through the web GUI... In some use cases, that's not even an option. Right now it seems like I have to keep working with 1.4 folders...
  9. There are also Alpine-based images, but this one is not for Alpine fans, as these images are based on SliTaz Linux. The resulting images are quite small (24 MB for the latest version, 2.4.1). See the image attached. Disclaimer: these images are 32-bit because my base image `icymatter/slitaz40-minimal` is a 32-bit image. Images on Docker Hub:
     * 1.3: icymatter/slitaz40-btsync13
     * 1.4: icymatter/slitaz40-btsync14
     * 2.4: icymatter/slitaz40-btsync24
     Hope this helps
  10. I just set up Resilio on two Macs and an iPhone and everything is in sync. Today I tried my server, using the official Docker container. However, after around half of my files synced, the web UI is no longer accessible. My (upload) client recognises no more available peers. Attaching to the container or restarting it is not possible; I have to restart Docker to regain access to the container. However, after 1 or 2 minutes of syncing the connection is lost again and the container is unresponsive. The logs from right when the connection is lost, and from a second try after a Docker restart, are attached.
  11. Hi all, I am having trouble automatically syncing directories in my user context with the Sync instance encapsulated in a Docker container (I have a Pro licence). Using the Sync Dockerfile [1] [2] (AFAICS Ubuntu 15.04), I have set up an instance on an Ubuntu-based server and another instance on my Fedora 23 desktop. The server's data directory points to a btrfs subvolume, i.e., DATA_FOLDER="/data/sync/":

```shell
docker run -d --name Sync -p $WEBUI_PORT:8888 -p 55555 \
  -v $DATA_FOLDER:/mnt/sync --restart on-failure bittorrent/sync
```

On the desktop I additionally mount my user's home to be able to sync directories selectively, i.e., into the container's /mnt/mounted_folders path (AFAICS the target path for additional directories in the container's default config):

```shell
docker run -d --name Sync -p $WEBUI_PORT:8888 -p 55555 \
  -v $DATA_FOLDER:/mnt/sync \
  -v /home/MYUSER:/mnt/mounted_folders/MYUSER \
  --restart on-failure bittorrent/sync
```

When I add a directory from my user's home path on the desktop, it does appear after some time in the server's WebUI, and the base directory is created in server:/data/sync/folder/... However, no files are being synced and the directory on the server stays empty. On the desktop, a .sync directory is created, owned by root as the Docker daemon's owner. I have already tried decreasing the scan frequency to 300 s, just in case the filesystem notifications are not being passed through to the container(??), but without success; the content of the user folder is not synced. Maybe somebody has an idea how to get the user folder synced? Cheers and thanks, Thomas
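One thing worth checking, sketched under the assumption that the server should mirror the desktop's layout: the server command only mounts /mnt/sync (the database/storage path), so folders added remotely have nowhere to land except inside the storage directory. Giving the server its own /mnt/mounted_folders bind mount keeps synced data out of the database path (the /data/folders host path is illustrative):

```shell
docker run -d --name Sync -p $WEBUI_PORT:8888 -p 55555 \
  -v $DATA_FOLDER:/mnt/sync \
  -v /data/folders:/mnt/mounted_folders \
  --restart on-failure bittorrent/sync
```

The root-owned .sync directory also hints that the container (running as root) may create paths the desktop user cannot later modify; pre-creating the shared directory with the desired owner before adding it in the UI sidesteps that.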
  12. Guest

    Btsync As Docker

    Hi, I am running Rockstor as my NAS; it uses Docker to run btsync, which is quite nice. Nevertheless, they lag behind in updating the Docker container to the latest version. My question now: why does the Sync team not provide Docker images with the latest version, as they do for Linux, Windows, Mac OS X and other platforms and systems? For some NAS systems this would make it so much easier to adopt the official Docker image in their installations. Thanks for any hints
  13. Hey! I'm trying to create a BitTorrent volume with Docker. It runs perfectly as root, but running btsync as root doesn't keep file ownerships in place. So I built a small bash script which detects ownership and creates a new user with the same uid and gid as the files before starting btsync. When I try to run btsync as this new user:

$ sudo -H -u btsync bash -c 'btsync --config /btsync/config --nodaemon'

I get this:

btsync: /mnt/jenkins/workspace/Build-Sync-x64/linux/breakpad/client/linux/handler/minidump_descriptor.h:55: google_breakpad::MinidumpDescriptor::MinidumpDescriptor(const string&): Assertion `!directory.empty()' failed.

My config file:

```json
{
  "device_name": "NAME",
  "listening_port": 55555,
  "check_for_updates": true,
  "use_upnp": true,
  "pid_file": "/btsync/",
  "download_limit": 0,
  "upload_limit": 0,
  "shared_folders": [
    {
      "secret": "SECRET",
      "dir": "/data",
      "use_relay_server": true,
      "use_tracker": true,
      "use_dht": false,
      "search_lan": true,
      "use_sync_trash": true
    }
  ]
}
```

What is wrong?
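The failed assertion comes from Breakpad's crash-dump setup: MinidumpDescriptor requires a non-empty directory, and btsync appears to derive that path from the running user's environment. A hedged guess is that the freshly created user lacks a home directory or a writable storage path. A sketch of a setup that gives the user both (paths illustrative; "storage_path" is a documented Sync config option, the rest is an assumption about what the assertion needs):

```shell
# Create the user with a real home directory (uid matching the data owner),
# and make the config/storage/data directories writable before launching
useradd -d /btsync -u "$OWNER_UID" btsync
mkdir -p /btsync/storage
chown -R btsync:btsync /btsync /data
sudo -H -u btsync bash -c 'btsync --config /btsync/config --nodaemon'
```

Adding "storage_path": "/btsync/storage" to the config file pins the database and dump location explicitly rather than relying on $HOME.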