Bug 12451 - ERROR: Database update process failed: Up-to-date (1)
Status: RESOLVED FIXED
Product: ClamAV
Classification: ClamAV
Component: freshclam
Version: stable
Hardware: x86_64 GNU/Linux
Importance: P3 normal
Target Milestone: 0.101.0
Assigned To: Micah Snyder
Duplicates: 12440
Depends on:
Blocks:
Reported: 2019-12-12 10:28 EST by D.L.C. Burggraaff
Modified: 2021-03-22 05:41 EDT
CC List: 3 users

See Also:
QA Contact:


Attachments
clamconf -n for "rescue" (1.92 KB, text/plain), 2019-12-12 10:29 EST, D.L.C. Burggraaff
clamconf -n for "pmagic" (1.61 KB, text/plain), 2019-12-12 10:31 EST, D.L.C. Burggraaff

Description D.L.C. Burggraaff 2019-12-12 10:28:11 EST
rescue: clamav 0.102.1-x86_64-1pm on Slackware Current64, Linux 5.4.2-burdi64, glibc 2.30-x86_64-1, zlib 1.2.11-x86_64-2
pmagic: clamav 0.102.0-i586-1_pmagic on Slackware 14.2 rebuilt, Linux 5.3.6-pmagic_64, glibc 2.23-i586-1_pmagic, zlib 1.2.11-i486-1_pmagic

On "rescue" freshclam fails in the morning with:
ClamAV update process started at Thu Dec 12 12:05:25 2019
daily database available for download (remote version: 25661)
WARNING: Mirror https://database.clamav.net is not synchronized.
ERROR: Unexpected error when attempting to update database: daily
WARNING: fc_update_databases: fc_update_database failed: Up-to-date (1)
ERROR: Database update process failed: Up-to-date (1)
ERROR: Update failed.

Whereas "pmagic" then succeeds:
ClamAV update process started at Thu Dec 12 11:58:23 2019
WARNING: DNS record is older than 3 hours.
Reading CVD header (daily.cvd): OK
daily database available for download (remote version: 25660)
Testing database: '/var/lib/clamav/tmp/clamav-29229204280b2f8526e7762f689fb8eb.tmp-daily.cvd' ...
Testing database: '/var/lib/clamav/tmp/clamav-29229204280b2f8526e7762f689fb8eb.tmp-daily.cvd' ...
Database test passed.
daily.cvd updated (version: 25660, sigs: 2043646, f-level: 63, builder: raynman)
Reading CVD header (main.cvd): OK
main database available for download (remote version: 59)
Testing database: '/var/lib/clamav/tmp/clamav-8e6a468ef75cf33c9419be07b54faa6c.tmp-main.cvd' ...
Testing database: '/var/lib/clamav/tmp/clamav-8e6a468ef75cf33c9419be07b54faa6c.tmp-main.cvd' ...
Database test passed.
main.cvd updated (version: 59, sigs: 4564902, f-level: 60, builder: sigmgr)
Reading CVD header (bytecode.cvd): OK
bytecode database available for download (remote version: 331)
Testing database: '/var/lib/clamav/tmp/clamav-5a79f66ff7127611175e17e25048ae14.tmp-bytecode.cvd' ...
Testing database: '/var/lib/clamav/tmp/clamav-5a79f66ff7127611175e17e25048ae14.tmp-bytecode.cvd' ...
Database test passed.
bytecode.cvd updated (version: 331, sigs: 94, f-level: 63, builder: anvilleg)

Note the different daily database versions: rescue 25661, pmagic 25660

In the afternoon both succeed with the daily 25661.

I already built clamav 0.103.e17296f but will have to wait till tomorrow to do the test.
Comment 1 D.L.C. Burggraaff 2019-12-12 10:29:25 EST
Created attachment 7619 [details]
clamconf -n for "rescue"
Comment 2 D.L.C. Burggraaff 2019-12-12 10:31:00 EST
Created attachment 7620 [details]
clamconf -n for "pmagic"
Comment 3 D.L.C. Burggraaff 2019-12-12 11:06:11 EST
Note that both "rescue" and "pmagic" are live ISOs, with /var/lib/clamav symlinking to an empty /etc/clamav or /root/clamav directory, respectively.
Comment 4 D.L.C. Burggraaff 2019-12-13 05:45:52 EST
This morning, after confirming that with clamav 0.102.1 freshclam still failed, I tested with clamav 0.103.e17296f: the problem persists ...
Comment 5 Micah Snyder 2019-12-13 13:36:21 EST
I will investigate.  This is the second time recently that we've seen a report on this issue.  Last time we cleared it up by purging the CloudFlare download cache, though this is also automatically done every time new databases are posted.
Comment 6 Micah Snyder 2019-12-13 14:07:59 EST
I couldn't reproduce the issue locally (eastern US).  It could be a regional issue today that you're experiencing.  Where are you connecting from?
Comment 7 D.L.C. Burggraaff 2019-12-13 18:47:03 EST
I connect from Europe - The Netherlands.
Be aware though that I see this problem only from 11:00 through 12:00 CET, or possibly even later. By 15:00 CET things are OK again.
Also be aware that when I see this problem on "rescue", "pmagic" is OK.
As I said before both are live ISOs. I see the problem in VirtualBox VM mode as well as in "real" mode. All my internet accesses are in NAT mode.
:D
Comment 8 D.L.C. Burggraaff 2019-12-13 19:01:37 EST
All my boxes access the internet in NAT mode from behind the same router and hence via the same "real" IP address.
Comment 9 D.L.C. Burggraaff 2019-12-16 07:55:14 EST
Yesterday at 12:10 CET I noticed that "rescue" did not show the problem, but that "pmagic" did. I have not been able to reaffirm this observation today.

Using the discretionary persistence provided by both the "rescue" and the "pmagic" live ISOs, I saved the contents of the /root and /etc directories yesterday at 12:15 CET *). This morning at 11:00 CET I found that with this saved data "rescue" did not show the problem. However, as soon as I cleared the /etc/clamav directory the problem was back.

From the findings so far I am inclined to deduce that during the error window (10:00 - 12:00 on "pmagic") the comparison of the server's daily database with the local one does not take into account the possibility that the local daily may not exist. I can only guess what causes this error window (a database update on the server?) and why the comparison does not trigger the problem outside the error window.

:D

*) For a good understanding remember that /var/lib/clamav symlinks to /etc/clamav.
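
To make the suspected flow concrete, here is a minimal stand-alone sketch of the comparison I believe goes wrong. This is not the real freshclam source; the names, constants and control flow are all invented, and it only reproduces the outcomes I observe on the live ISOs (error with an empty local database, OK with a saved one) during the window in which DNS already advertises 25661 while the mirror still serves 25660.

    /*
     * Sketch only: suspected freshclam decision during the error window.
     * NOT the real freshclam source; all names are invented.
     */
    #include <stdio.h>

    enum result { UPDATED, UP_TO_DATE, MIRROR_NOT_SYNCED };

    /* local_version == 0 means: no local daily.cvd at all (fresh live ISO). */
    static enum result check_daily(unsigned dns_version,    /* from DNS TXT record */
                                   unsigned mirror_version, /* from the CVD header */
                                   unsigned local_version)  /* 0 if missing        */
    {
        if (local_version != 0 && mirror_version <= local_version)
            return UP_TO_DATE;        /* saved daily.cvd: run ends harmlessly */

        if (mirror_version < dns_version)
            /* CDN still serves the previous daily: "mirror is not synchronized".
             * Suspected flaw: with no local database this path gives up instead
             * of downloading the (perfectly usable) older daily, and the failure
             * bubbles up as "Up-to-date (1)".                                   */
            return MIRROR_NOT_SYNCED;

        return UPDATED;
    }

    int main(void)
    {
        /* "rescue" with an empty /var/lib/clamav, inside the error window:
         * DNS already advertises 25661, the mirror still serves 25660.     */
        enum result empty = check_daily(25661, 25660, 0);
        /* Same window, but with the saved daily.cvd 25660 from Dec 15.     */
        enum result saved = check_daily(25661, 25660, 25660);

        printf("empty local db: %s\n", empty == MIRROR_NOT_SYNCED ? "error (as observed)" : "ok");
        printf("saved local db: %s\n", saved == UP_TO_DATE        ? "ok (as observed)"    : "error");
        return 0;
    }

Compiled and run as an ordinary C program, this prints "error (as observed)" for the empty case and "ok (as observed)" for the saved one, matching what I see on the live ISOs.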
Comment 10 D.L.C. Burggraaff 2019-12-16 08:01:54 EST
Correction: the error window (10:00 - 12:00 on "rescue").
Comment 11 Micah Snyder 2019-12-16 17:48:10 EST
This morning Ops changed a part of signature deployment to adjust the delays between deploying new databases and clearing the Cloudflare cache.  Hopefully this issue will be resolved.  Please let me know how things go in the coming day(s).
Comment 12 D.L.C. Burggraaff 2019-12-17 09:25:10 EST
"rescue" : Dec 17, 2019
times in CET
/var/lib/clamav --> /etc/clamav
empty: /etc/clamav is empty
saved: /etc/clamav contains bytecode.cvd, daily.cvd and main.cvd from Dec 15, 2019

        empty   saved
        =====   =====
09:00     ok      ok
10:05     ok      ok
10:59     ok      ok
12:04   error     ok   <===
13:03     ok      ok
15:07     ok      ok
Comment 13 D.L.C. Burggraaff 2019-12-18 06:03:14 EST
"rescue" : Dec 18, 2019
times in CET
/var/lib/clamav --> /etc/clamav
empty: /etc/clamav is empty
saved: /etc/clamav contains bytecode.cvd, daily.cvd and main.cvd from Dec 15, 2019

        empty   saved
        =====   =====
12:00   error     ok
Comment 14 Micah Snyder 2019-12-24 14:00:54 EST
Sorry about the delay.  Thanks for following up to report that the issue still occurs for fresh downloads around the time a new database is published.

I would like to add some lenience into freshclam to alleviate the error, as it doesn't seem that the CloudFlare cache clearing options are able to resolve the issue.

Will update the ticket when I have something further.
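
For illustration only, here is a rough sketch of the kind of lenience I have in mind: accept whatever the mirror currently serves as long as it is newer than the local copy, and demote the "not synchronized" case to a warning rather than a failure. This is a mock-up with invented names, not the change that will actually be committed.

    /*
     * Sketch only: one possible shape of the "lenience" mentioned above.
     * Invented names; not the actual freshclam patch.
     */
    #include <stdbool.h>
    #include <stdio.h>

    /* Accept whatever the mirror currently serves as long as it improves on the
     * local copy, even if DNS already advertises a newer build that the CDN has
     * not finished distributing; only skip when the mirror has nothing newer.   */
    static bool should_download(unsigned dns_version,
                                unsigned mirror_version,
                                unsigned local_version /* 0 = no local database */)
    {
        if (mirror_version < dns_version)
            fprintf(stderr,
                    "WARNING: mirror lags the advertised version (%u < %u), accepting it anyway\n",
                    mirror_version, dns_version);
        return mirror_version > local_version;
    }

    int main(void)
    {
        /* Error-window case from this report: empty local db, stale mirror. */
        printf("empty db: %s\n",
               should_download(25661, 25660, 0) ? "download 25660" : "skip");
        /* Saved db already at the mirror's version: nothing to do.          */
        printf("saved db: %s\n",
               should_download(25661, 25660, 25660) ? "download" : "skip (up to date)");
        return 0;
    }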
Comment 16 D.L.C. Burggraaff 2020-01-29 09:12:52 EST
Confirmed for 0.102.2/25709.
Whereas for 0.102.1 the database update failed, for 0.102.2/25709 it succeeded both with an empty database and with a previous one.

For 0.103 I had a build failure.
:D
Comment 17 Micah Snyder 2020-01-29 12:15:16 EST
Yup! Sorry about that, we broke the build yesterday by accident.  It turns out one commit moved a variable that the next commit made use of.  

Just fixed it this morning. https://github.com/Cisco-Talos/clamav-devel/commit/f450e57c4815ddbfffbfa06ce6356e5c897abaef
Comment 18 Micah Snyder 2020-01-29 12:29:22 EST
*** Bug 12440 has been marked as a duplicate of this bug. ***
Comment 19 D.L.C. Burggraaff 2020-01-30 08:43:50 EST
The new build was successful. Waiting for 0.102.1 to fail ...

14:40 CET: The database update for 0.102.1 failed, whereas for 0.103.0-devel-2020130/25710 it succeeded.

Thanks and regards, Dick
:D