MPEG compression is a "lossy" data compression method. Lossless algorithms such as Lempel-Ziv-Welch (LZW) closely examine a data stream, looking for ways to represent it exactly using fewer symbols than the original, reducing the volume of data required to reproduce the information. Depending greatly on the content of the original stream, LZW-based and similar compression utilities can generally be expected to reduce the size of a file by about a factor of two, and the original data stream can be recovered from the new, smaller file perfectly, without the loss of a single bit. That is a major reason why zip, arj, gzip, etc. are ubiquitously used to transfer programs and data around the internet. These algorithms could be applied as well to photos, music, and video, but doing so would only reduce a 3 Gbps uncompressed HD video stream to about 1.5 Gbps. That is nowhere near good enough to make storing digital music, photos, and video, or transferring them over the internet (or over the air, for that matter), practical.
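To see what "lossless" means in practice, here is a quick Python sketch using the standard zlib module (the same DEFLATE algorithm behind gzip and zip). The input is a toy repetitive string, so it compresses far better than the roughly 2:1 typical of ordinary files, but the round trip is what matters:

    import zlib

    original = b"the quick brown fox jumps over the lazy dog " * 200
    compressed = zlib.compress(original)
    restored = zlib.decompress(compressed)

    assert restored == original  # lossless: every bit is recovered
    print(len(original), "->", len(compressed), "bytes")
    # Highly repetitive input like this shrinks far more than 2:1;
    # ordinary programs and data files tend to land nearer that figure.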
Fortunately, if we analyze things like photos and video in a different way, there tend to be large areas that are approximately repetitive. In addition, significant amounts of both visual and audible data can be discarded without the human brain being able to tell the difference, or at least not very much or very often. Because of this, we can employ "lossy" compression algorithms to translate the music, photos, or video into much lower bandwidth bitstreams or much smaller files. The recovered output is not identical to the original data, but it is close enough for our purposes.
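As a toy illustration (not how MPEG actually works, which uses block transforms and perceptual models), one can simply "throw away" the low-order bits of each sample. The reconstruction is only approximate, but the coarser data then compresses much better in the lossless stage:

    import zlib

    samples = bytes(range(256))                   # a smooth 8-bit ramp
    quantized = bytes(b & 0xF0 for b in samples)  # keep only the top 4 bits

    worst = max(abs(a - b) for a, b in zip(samples, quantized))
    print("worst-case error per sample:", worst)  # 15 out of 255

    # Fewer distinct values means more redundancy to exploit:
    print(len(zlib.compress(samples)), "vs", len(zlib.compress(quantized)), "bytes")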
The question is, "How much data do we throw away?" The answer depends on several variables. The first is the nature of the original data. Video (sans the audio) lends itself to greater compression than any other data of which I am aware. Large areas of the screen may be approximately duplicated from one frame of video to the next. One relies on the fact that the video only changes a little bit from one frame to the next, so the data stream reproduces the first picture in full, but subsequent frames are represented in the data stream only by their changes from the previous frame, not the entire frame itself. By choosing prudently how much of a change counts as enough of a difference to be represented in the picture at all, we can reduce the data needed to reproduce the next frame. Where the line between "prudent" and "imprudent" lies depends upon how much degradation of picture quality we are willing to accept. If one has lower standards, the compression can be greater. As Soapm mentioned, this may depend on the quantity of alcohol one has consumed.
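Here is a minimal sketch of that idea in Python, with frames as flat lists of pixel values and a made-up change threshold. Real MPEG is far more sophisticated (motion compensation, transforms, entropy coding), but the principle of storing only what changed "enough" is the same:

    THRESHOLD = 8  # how different a pixel must be before we bother encoding it

    def frame_delta(prev, curr):
        """Record (index, value) only for pixels that changed 'enough'."""
        return [(i, c) for i, (p, c) in enumerate(zip(prev, curr))
                if abs(p - c) > THRESHOLD]

    def apply_delta(prev, delta):
        frame = list(prev)
        for i, value in delta:
            frame[i] = value
        return frame

    prev = [100] * 1000                 # a flat gray frame
    curr = list(prev)
    curr[500:520] = [200] * 20          # a small bright object appears

    delta = frame_delta(prev, curr)
    print(len(delta), "of", len(curr), "pixels stored")  # 20 of 1000
    assert apply_delta(prev, delta) == curr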
Another variable is the amount of processing power and time available. Analyzing the data more extensively can produce a much tighter bitstream with virtually the same perceived quality. If one has a monster CPU array and unlimited time, remarkably small bitstreams can be created that produce very pleasing results. If one must compress the data in real time with a reasonably economical processor, the stream is going to require rather more bandwidth.
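The same tradeoff shows up even in simple lossless compressors. zlib's "level" knob trades CPU time for output size, loosely analogous to a video encoder's fast/slow presets (toy data here, so the exact numbers will vary from machine to machine):

    import random, time, zlib

    random.seed(1)
    words = [b"frame", b"pixel", b"block", b"macro", b"delta", b"scene"]
    data = b" ".join(random.choice(words) for _ in range(200_000))

    for level in (1, 6, 9):             # 1 = fastest, 9 = most thorough
        start = time.perf_counter()
        size = len(zlib.compress(data, level))
        elapsed = time.perf_counter() - start
        print(f"level {level}: {size} bytes in {elapsed:.3f} s")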
Of course, the actual content itself has much to do with it. Some video lends itself to greater compression, and other video to less, without unpleasant artifacts in the output. Other than changing the resolution, the individual doing the authoring doesn't really have much control over the content, but the other factors are much more within his control. That said, most people are going to want (or absolutely need) to "set it and forget it" when it comes to the compression parameters, and may have rather limited patience in terms of the amount of time required to get things done. (Some compression utilities also offer the user very limited control.) That being the case, rather than recoding every movie (or at least part of it), observing the PQ, and adjusting the compression parameters, one usually just finds a set of compression parameters that almost always produces good results and sticks with them. That is fine for most of us, but if space is a major concern, one may wish to fiddle with the compression on an individual basis to produce the tightest practical bitstream.
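How much content matters is easy to demonstrate with two extremes: a featureless "scene" versus pure noise (again using zlib as a stand-in; a real video codec behaves analogously, which is why confetti and static are notoriously hard to encode):

    import os, zlib

    flat = bytes(100_000)        # 100 KB of identical samples: a static scene
    noisy = os.urandom(100_000)  # 100 KB of random noise: confetti, static

    for name, data in (("flat", flat), ("noisy", noisy)):
        ratio = len(data) / len(zlib.compress(data))
        print(f"{name}: {ratio:.0f}:1")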
So yes, 2 hours or even significantly more can fit on a DVD, but one must make compromises in one or more areas to achieve it.