SVN for large repos - commit fails: File too large

    svn, version 1.7.14 (r1542130) on RHEL7.4

    Hi, we want to use SVN for version control of our big (>500GB) CAD/EDA software repository.
    It is said that SVN scales with file and repository size.
    We already have experience with SVN, which is why we want to use it.
    The repository is located on a NFS share.

    Problem: When I "svn commit" a big directory with a lot of files and folders, I get this error message:
    svn: E000027: Commit failed (details follow):
    svn: E000027: Can't open file '/icd_repos/icd/db/transactions/702-ka.txn/node._8l30.0': File too large

    What might be the problem here?

  • #2
    See here: [url]https://stackoverflow.com/questions/17315349/subversion-commit-big-file-issue[/url]

    In addition, you might check to see that quota limits are turned off.

    And check your NFS server's limits as well.
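    Two quick client-side checks for such limits (a sketch, assuming a Linux client; the quota tools may not be installed everywhere):

    ```shell
    # Per-process file-size limit; writes beyond it fail with "File too large".
    ulimit -f                                # "unlimited" when no limit is set

    # User disk quotas, if the quota tools are installed and quotas are enabled.
    quota -s 2>/dev/null || echo "no user quotas reported"
    ```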



    • #3
      I tried it on a local repo (local disk) via "svn add" and "svn commit"; this failed again with a different error after ~3h (something like "file is already under version control").
      Anyway, next I tried "svn import", which was successful on the local repo. The data was sent into the repo.

      Next step is to do "svn import" on the NFS repo.



      • #4
        You really should update your Subversion: V1.7 has been out of support for quite a while. V1.11 was just released (although, for a server, I'd consider V1.10 since it is a "long term release").



        • #5
          Updating might not be the first choice right now, as the OS is under corporate control, but I will consider it later.

          A "svn import" on the NFS also fails with the above error "File too large".


          I want to understand how SVN creates files during commit/import, to identify the bottleneck here.
          Does SVN create one big file in the repository that lists all committed files or links to them?
          If that were the case, a file-size limit on the NFS would explain the error.



          • #6
            What exact command are you using?

            If you are using "svn import somedir http://somehost/svn/repoName/someplace" then that will go through Apache and be governed by the "LimitXMLRequestBody" Apache tunable (default is 1MB) as described in that stackoverflow article I pointed to.
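    For the http:// case, that tunable can be raised in the Apache configuration; a sketch of the relevant location block (the directive is real, the paths are placeholders):

    ```apache
    <Location /svn>
      DAV svn
      SVNParentPath /var/svn
      # Cap on the XML portion of a request body; the default is 1000000 bytes,
      # and 0 disables the check entirely.
      LimitXMLRequestBody 0
    </Location>
    ```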

            If you are using "svn import somedir file:///path/to/repoName/someplace" then that will not involve Apache (it will need to be run on the server host). So any failure is likely to be a file system limit.



            • #7
              I am using either "file:///path_to_repo" or "svn://server:/path_to_repo".
              Both work on the local repo, but not on the NFS repo; therefore I conclude it's a limit issue on the NFS.

              What is the maximum file size SVN creates?
              Does it depend on the biggest file I commit to the repo, or on the complete size of the repo (e.g. one big file that somehow lists all the content in the repo)?





              • #8
                I am not a committer and do not have a deep internal understanding of the Subversion code.

                That said, if you change multiple files in a single revision ("svn checkin", etc.) then all of those changes are combined and stored into a single file (e.g. "/path/to/repo/revs/<hash>/<revisionFile>"). A "properties" file holds additional information.

                Beyond that, eventually, if you choose to do so and the repository is of a sufficiently recent format, the revisions themselves can be "packed" into "shards" as specified in the "db/format" file (normally 1000 revisions per shard). This is done to speed up the checkout process.

                What you should do to see just how large the "final" file is, is to peruse the repository storage tree. Specifically look at the files in the "revs" tree.
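                For example, the storage tree can be perused with a one-liner (a sketch; REPO is a placeholder for the real repository root, and GNU find is assumed for -printf):

                ```shell
                # List the largest files in the repository, biggest first.
                REPO=${REPO:-.}                      # placeholder: set to the repository root
                find "$REPO" -type f -printf '%s %p\n' 2>/dev/null | sort -nr | head
                ```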

                During an import the transaction itself could create a very large file (temporarily, in the "db/transactions" directory). I kind of doubt that it is this file that is the issue.

                You could determine exactly what's going on by attaching the "strace" command to the "svn client" itself when using the "file:///" URL type and watch when the write fails. Make sure to have the output of "strace" go to a file (use the "-o" option). For the purposes of this exercise you could limit the output to handling the "open()" and "write()" system calls (well, probably).



                • #9
                  Hi,
                  I found (du -ah . | sort -nr | head) that the biggest file inside the local repository is 70GB.
                  I guess that is the reason why the commit to NFS fails.

                  Next step is to force the creation of a file with 70GB on the NFS to see if the error occurs again.
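                  One way to create such a file without pushing 70GB of real data is a sparse file written with dd (a sketch; TARGET stands in for the NFS mount point, /tmp is only a default so the commands run anywhere):

                  ```shell
                  TARGET=${TARGET:-/tmp}             # placeholder: set to the NFS mount point
                  # Write 1 MiB at the 70 GiB mark, giving a file with a 70 GiB logical size.
                  dd if=/dev/zero of="$TARGET/svn_size_probe" bs=1M count=1 seek=71679 2>/dev/null
                  ls -l "$TARGET/svn_size_probe"
                  rm -f "$TARGET/svn_size_probe"
                  ```

                  Note that a sparse file only exercises the maximum file size, not the space actually used, so block quotas would not trigger.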



                  • #10
                    To test the file-size limit on the NFS, I simply copied the 20GB local repo file to the NFS via:

                    cp ./icd_tech/db/revs/0/1 /icd_repos/

                    and it was working.

                    File size on NFS seems not to cause the SVN error:
                    svn: E000027: Commit failed (details follow):
                    svn: E000027: Can't open file '/icd_repos/icd/db/transactions/702-ka.txn/node._8l30.0': File too large

                    It must be something in the SVN process...



                    • #11
                      If the largest file is 70GB then please test with 70GB (not 20GB).

                      To see what's happening and when, your best bet would be to use the "file:///" URL type and the "strace" command. Then find the failure in the strace log file.



                      • #12
                        Per the "open(2)" man page:

                        [B]EOVERFLOW[/B] [I]pathname[/I] refers to a regular file that is too large to be opened. The usual scenario here is that an application compiled on a 32-bit platform without [I]-D_FILE_OFFSET_BITS=64[/I] tried to open a file whose size exceeds [I](1<<31)-1[/I] bytes; see also [B]O_LARGEFILE [/B]above. This is the error specified by POSIX.1; in kernels before 2.6.24, Linux gave the error [B]EFBIG[/B] for this case.
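                        Whether that scenario applies can be checked from the ELF header of the svn binary itself (a sketch; the /bin/sh fallback is only a stand-in so the command runs when svn is not on the PATH):

                        ```shell
                        BIN=$(command -v svn || echo /bin/sh)   # fallback is a stand-in only
                        # Byte 4 of an ELF file is the class: 1 = 32-bit, 2 = 64-bit.
                        # (The 'file' utility reports the same thing more readably.)
                        dd if="$BIN" bs=1 skip=4 count=1 2>/dev/null | od -An -tu1
                        ```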



                        • #13
                          Copying the 70GB file to the NFS location also works fine.

                          I will try the strace and file:/// combination as soon as I have learned how to use strace.

                          Before that, I want to report that the same error occurred when I ran "svnsync" from the local repo to an NFS-located repo (using the 20GB repo):
                          svnsync: E000027: Can't open file '/icd_repos/icd_backup_repos/icd_tech/db/transactions/0-0.txn/node._e51k.0.children': File too large

                          Again, writing to the NFS via an SVN process causes the error. Unfortunately I cannot determine the size of the file above, because it no longer exists.




                          • #14
                            Here is the output of the command:

                            # strace -e trace=open,read,write -o svnsync_nfs_strace.log svnsync sync file:///icd_repos/icd_backup_repos/icd_tech/ file:///icd_repos_local/icd_tech/


                            The last 30-40 lines of the logfile are:

                            # tail -n 30 svnsync_nfs_strace.log | head -n 10
                            read(4, "", 4096) = 0
                            read(4, "", 4096) = 0
                            read(4, "", 4096) = 0
                            read(4, "", 4096) = 0
                            read(4, "", 4096) = 0
                            open("/icd_repos/icd_backup_repos/icd_tech/db/transactions/0-0.txn/node._e51k.0.children", O_WRONLY|O_CREAT|O_CLOEXEC, 0666) = -1 EFBIG (File too large)
                            open("/icd_repos/icd_backup_repos/icd_tech/db/revprops/0/0", O_RDONLY|O_CLOEXEC) = 4
                            read(4, "K 8\nsvn:date\nV 27\n2018-11-09T09:"..., 4096) = 336
                            read(4, "", 4096) = 0
                            read(4, "", 4096) = 0

                            The "open()" call is what fails, with EFBIG ...



                            • #15
                              Did you check to see what the size of that file ("/icd_repos/icd_backup_repos/icd_tech/db/transactions/0-0.txn/node._e51k.0.children") was?
