Bugzilla – Bug 12451
ERROR: Database update process failed: Up-to-date (1)
Last modified: 2021-03-22 05:41:25 EDT
rescue: clamav 0.102.1-x86_64-1pm on Slackware Current64, Linux 5.4.2-burdi64, glibc 2.30-x86_64-1, zlib 1.2.11-x86_64-2
pmagic: clamav 0.102.0-i586-1_pmagic on Slackware 14.2 rebuilt, Linux 5.3.6-pmagic_64, glibc 2.23-i586-1_pmagic, zlib 1.2.11-i486-1_pmagic

On "rescue" freshclam fails in the morning with:

ClamAV update process started at Thu Dec 12 12:05:25 2019
daily database available for download (remote version: 25661)
WARNING: Mirror https://database.clamav.net is not synchronized.
ERROR: Unexpected error when attempting to update database: daily
WARNING: fc_update_databases: fc_update_database failed: Up-to-date (1)
ERROR: Database update process failed: Up-to-date (1)
ERROR: Update failed.

Whereas "pmagic" then succeeds:

ClamAV update process started at Thu Dec 12 11:58:23 2019
WARNING: DNS record is older than 3 hours.
Reading CVD header (daily.cvd): OK
daily database available for download (remote version: 25660)
Testing database: '/var/lib/clamav/tmp/clamav-29229204280b2f8526e7762f689fb8eb.tmp-daily.cvd' ...
Database test passed.
daily.cvd updated (version: 25660, sigs: 2043646, f-level: 63, builder: raynman)
Reading CVD header (main.cvd): OK
main database available for download (remote version: 59)
Testing database: '/var/lib/clamav/tmp/clamav-8e6a468ef75cf33c9419be07b54faa6c.tmp-main.cvd' ...
Database test passed.
main.cvd updated (version: 59, sigs: 4564902, f-level: 60, builder: sigmgr)
Reading CVD header (bytecode.cvd): OK
bytecode database available for download (remote version: 331)
Testing database: '/var/lib/clamav/tmp/clamav-5a79f66ff7127611175e17e25048ae14.tmp-bytecode.cvd' ...
Database test passed.
bytecode.cvd updated (version: 331, sigs: 94, f-level: 63, builder: anvilleg)

Note the different daily database versions: rescue 25661, pmagic 25660.
In the afternoon both succeed with daily 25661.
I have already built clamav 0.103.e17296f but will have to wait till tomorrow to do the test.
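For context, freshclam learns the latest available database versions from a DNS TXT record and then fetches the CVDs themselves from the CDN. The logs above show the two boxes disagreeing: on "rescue", DNS advertised daily 25661 while the CDN still served 25660 ("Mirror ... is not synchronized"), whereas on "pmagic" the stale DNS record ("older than 3 hours") was bypassed in favor of reading the CVD header directly, so the versions agreed. A minimal C sketch of a version check with that failure mode (all names are hypothetical, not ClamAV's actual internals):

#include <stdio.h>

typedef enum { FC_SUCCESS = 0, FC_UPTODATE = 1 } fc_error_t;

static fc_error_t check_remote_version(
    int dns_version,   /* advertised via the DNS TXT record   */
    int cvd_version,   /* in the CVD header served by the CDN */
    int local_version) /* 0 when no local daily.cvd exists    */
{
    if (cvd_version < dns_version) {
        fprintf(stderr, "WARNING: Mirror is not synchronized.\n");
        return FC_UPTODATE; /* surfaces as "Up-to-date (1)" */
    }
    return (cvd_version > local_version) ? FC_SUCCESS : FC_UPTODATE;
}

int main(void)
{
    /* "rescue": fresh DNS record (25661) vs. the CDN's 25660. */
    if (check_remote_version(25661, 25660, 0) == FC_UPTODATE)
        fprintf(stderr,
                "ERROR: Database update process failed: Up-to-date (1)\n");

    /* "pmagic": DNS was stale, so the expected version came from
     * the CVD header itself (25660); no mismatch, update succeeds. */
    if (check_remote_version(25660, 25660, 0) == FC_SUCCESS)
        printf("daily.cvd updated (version: 25660)\n");
    return 0;
}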
Created attachment 7619 [details] clamconf -n for "rescue"
Created attachment 7620 [details] clamconf -n for "pmagic"
Note that both "rescue" and "pmagic" are live ISOs, with /var/lib/clamav symlinking to an empty directory: /etc/clamav and /root/clamav, respectively.
This morning, after confirming that freshclam still failed with clamav 0.102.1, I tested with clamav 0.103.e17296f: the problem persists ...
I will investigate. This is the second time recently that we've seen a report on this issue. Last time we cleared it up by purging the CloudFlare download cache, though this is also automatically done every time new databases are posted.
I couldn't reproduce the issue locally (eastern US). It could be a regional issue today that you're experiencing. Where are you connecting from?
I connect from Europe - the Netherlands. Be aware though that I see this problem only from 11:00 through 12:00 CET, or possibly even later; by 15:00 CET things are OK again. Also be aware that when I see this problem on "rescue", "pmagic" is OK. As I said before, both are live ISOs. I see the problem in VirtualBox VM mode as well as in "real" mode. All my internet accesses are in NAT mode. :D
All my boxes access the internet in NAT mode from behind the same router and hence via the same "real" IP address.
Yesterday at 12:10 CET I noticed that "rescue" did not show the problem, but that "pmagic" did. I have not been able to reaffirm this observation today.

Using the discretionary persistence provided by the "rescue" as well as the "pmagic" live ISOs, I saved yesterday at 12:15 CET the contents of the /root and /etc directories *). This morning at 11:00 CET I found that with this saved data "rescue" did not show the problem. However, as soon as I cleared the /etc/clamav directory the problem was back.

From the findings so far I am inclined to deduce that during the error window (10:00 - 12:00 on "pmagic") the comparison of the server's daily database with the local one does not take into account the possibility that the local daily may not exist. I have to guess what causes this error window (database update on the server?) and why the comparison does not trigger the problem outside the error window. :D

*) For a good understanding, remember that /var/lib/clamav symlinks to /etc/clamav.
Correction: the error window (10:00 - 12:00 on "rescue").
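Continuing the hypothetical sketch from earlier, this deduction would mean the bail-out only bites when there is no local daily.cvd to fall back on; with the saved Dec 15 databases, the older CDN copy is still accepted. A sketch of that refined model (names, and the saved version number, are illustrative only):

#include <stdio.h>

typedef enum { FC_SUCCESS = 0, FC_UPTODATE = 1 } fc_error_t;

static fc_error_t check_remote_version(
    int dns_version, int cvd_version, int local_version)
{
    if (cvd_version < dns_version) {
        fprintf(stderr, "WARNING: Mirror is not synchronized.\n");
        if (local_version == 0)
            return FC_UPTODATE; /* the "Up-to-date (1)" failure:
                                   no local database to compare */
    }
    return (cvd_version > local_version) ? FC_SUCCESS : FC_UPTODATE;
}

int main(void)
{
    /* Inside the error window: DNS advertises 25661, CDN serves 25660. */
    printf("empty: %s\n",
           check_remote_version(25661, 25660, 0) ? "error" : "ok");
    /* With saved databases (some older version, say 25650). */
    printf("saved: %s\n",
           check_remote_version(25661, 25660, 25650) ? "error" : "ok");
    return 0;
}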
This morning Ops changed a part of signature deployment to adjust the delays between deploying new databases and clearing the Cloudflare cache. Hopefully this issue will be resolved. Please let me know how things go in the coming day(s).
"rescue" : Dec 17, 2019 times in CET /var/lib/clamav --> /etc/clamav empty: /etc/clamav is empty saved: /etc/clamav contains bytecode.cvd, daily.cvd and main.cvd from Dec 15, 2019 empty saved ===== ===== 09:00 ok ok 10:05 ok ok 10:59 ok ok 12:04 error ok <=== 13:03 ok ok 15:07 ok ok
"rescue" : Dec 18, 2019 times in CET /var/lib/clamav --> /etc/clamav empty: /etc/clamav is empty saved: /etc/clamav contains bytecode.cvd, daily.cvd and main.cvd from Dec 15, 2019 empty saved ===== ===== 12:00 error ok
Sorry about the delay. Thanks for following up to report that the issue still occurs for fresh downloads around the time of the new database. I would like to add some lenience into freshclam to alleviate the error, as it doesn't seem that the CloudFlare cache clearing options are able to resolve the issue. Will update the ticket when I have something further.
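For illustration only, one plausible shape of such lenience, continuing the hypothetical sketch from earlier (this is an assumption about the approach, not a description of any actual commit): when the CDN lags behind the DNS-advertised version, accept its copy anyway whenever it improves on what is held locally, demoting the mismatch to a warning.

#include <stdio.h>

typedef enum { FC_SUCCESS = 0, FC_UPTODATE = 1 } fc_error_t;

static fc_error_t check_remote_version_lenient(
    int dns_version, int cvd_version, int local_version)
{
    if (cvd_version < dns_version)
        fprintf(stderr,
                "WARNING: Mirror is not synchronized; "
                "accepting available version %d anyway.\n", cvd_version);

    /* Download as long as the CDN's copy improves on ours; this
     * covers the empty-database case (local_version == 0) seen on
     * "rescue" during the error window. */
    return (cvd_version > local_version) ? FC_SUCCESS : FC_UPTODATE;
}

int main(void)
{
    /* The failing scenario now succeeds: 25660 beats no database. */
    if (check_remote_version_lenient(25661, 25660, 0) == FC_SUCCESS)
        printf("daily.cvd updated (version: 25660)\n");
    return 0;
}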
Just merged a commit to address this issue: For 0.102.2: https://github.com/Cisco-Talos/clamav-devel/commit/01407518b5c0967239e044db396b361a362fb658 For 0.103: https://github.com/Cisco-Talos/clamav-devel/commit/9e2be47cb220fae6662b1e6089d7442bb4a09ece
Confirmed for 0.102.2/25709. Whereas for 0.102.1 the database update failed, for 0.102.2/25709 it succeeded, with an empty database as well as with a previous one. For 0.103 I had a build failure. :D
Yup! Sorry about that, we broke the build yesterday by accident. It turns out one commit moved a variable that the next commit made use of. Just fixed it this morning. https://github.com/Cisco-Talos/clamav-devel/commit/f450e57c4815ddbfffbfa06ce6356e5c897abaef
*** Bug 12440 has been marked as a duplicate of this bug. ***
The new build was successful. Waiting for 0.102.1 to fail ...

14:40 CET: The database update for 0.102.1 failed, whereas for 0.103.0-devel-2020130/25710 it succeeded.

Thanks and regards, Dick :D