<!DOCTYPE HTML PUBLIC "-//W3C//DTD HTML 3.2 Final//EN">
<HTML>
<HEAD>
<META NAME="GENERATOR" CONTENT="SGML-Tools 1.0.9">
<TITLE>Linux Ext2fs Undeletion mini-HOWTO: What recovery rate can I expect?</TITLE>
<LINK HREF="Ext2fs-Undeletion-4.html" REL=next>
<LINK HREF="Ext2fs-Undeletion-2.html" REL=previous>
<LINK HREF="Ext2fs-Undeletion.html#toc3" REL=contents>
</HEAD>
<BODY>
<A HREF="Ext2fs-Undeletion-4.html">Next</A>
<A HREF="Ext2fs-Undeletion-2.html">Previous</A>
<A HREF="Ext2fs-Undeletion.html#toc3">Contents</A>
<HR>
<H2><A NAME="s3">3. What recovery rate can I expect?</A></H2>
<P>That depends. Among the problems with recovering files on a
high-quality, multi-tasking, multi-user operating system like Linux is that
you never know when someone wants to write to the disk. So when the
operating system is told to delete a file, it assumes that the blocks used
by that file are fair game when it wants to allocate space for a new file.
(This is a specific example of a general principle for Unix-like systems:
the kernel and the associated tools assume that the users aren't idiots.)
In general, the more usage your machine gets, the less likely you are to be
able to recover files successfully.
<P>Also, disk fragmentation can affect the ease of recovering files. If the
partition containing the deleted files is very fragmented, you are unlikely to
be able to read a whole file.
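<P>As a rough check of how fragmented a filesystem is, you can run
<CODE>e2fsck</CODE> in read-only mode (the <CODE>-n</CODE> switch) and look at the
"non-contiguous" figure in its summary line. The sketch below builds a
scratch ext2 image in an ordinary file so nothing real is touched; the
path <CODE>/tmp/scratch.img</CODE> is purely illustrative, and this assumes the
e2fsprogs utilities are installed. On a real system you would point
<CODE>e2fsck -f -n</CODE> at the partition itself (for example
<CODE>/dev/hda2</CODE>).
<P>
<PRE>
# Build a tiny ext2 filesystem in an ordinary file (no root needed).
dd if=/dev/zero of=/tmp/scratch.img bs=1024 count=1024 2>/dev/null
mke2fs -F -q /tmp/scratch.img

# -f forces a full check, -n opens the filesystem read-only;
# the summary line reports the percentage of non-contiguous files.
e2fsck -f -n /tmp/scratch.img

rm /tmp/scratch.img
</PRE>
<P>The higher that non-contiguous percentage, the more likely it is that a
recovered file will come back with pieces missing.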
<P>If your machine, like mine, is effectively a single-user workstation, and
you weren't doing anything disk-intensive at the fatal moment of deleting
those files, I would expect a recovery rate in the same ball-park as
detailed above. I retrieved nearly 94% of the files (and these were
binary files, please note) undamaged. If you get 80% or better, you
can feel pretty pleased with yourself, I should think.
<P>
<HR>
<A HREF="Ext2fs-Undeletion-4.html">Next</A>
<A HREF="Ext2fs-Undeletion-2.html">Previous</A>
<A HREF="Ext2fs-Undeletion.html#toc3">Contents</A>
</BODY>
</HTML>