Huffman and adaptive Huffman are examples of coding, which takes advantage of a statistical skew in the probabilities of the symbols to code them into as few bits as possible. (There are other types of coding, such as arithmetic coding, range coding, and asymmetric numeral systems.)
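To make that concrete, here is a minimal sketch of static Huffman coding in Python. The function name `huffman_code` and the greedy tree construction via a heap are just the textbook approach, not any particular library's API; frequent symbols end up near the root and therefore get short codes.

```python
import heapq
from collections import Counter

def huffman_code(text):
    """Build a Huffman code for text: frequent symbols get short codes."""
    freq = Counter(text)
    # Heap entries are (weight, tiebreak, tree); a tree is either a
    # symbol or a (left, right) pair. The integer tiebreak keeps the
    # heap from ever comparing two trees directly.
    heap = [(w, i, sym) for i, (sym, w) in enumerate(freq.items())]
    heapq.heapify(heap)
    count = len(heap)
    if count == 1:  # degenerate case: only one distinct symbol
        return {heap[0][2]: "0"}
    while len(heap) > 1:
        w1, _, t1 = heapq.heappop(heap)
        w2, _, t2 = heapq.heappop(heap)
        heapq.heappush(heap, (w1 + w2, count, (t1, t2)))
        count += 1
    codes = {}
    def walk(tree, prefix):
        if isinstance(tree, tuple):
            walk(tree[0], prefix + "0")
            walk(tree[1], prefix + "1")
        else:
            codes[tree] = prefix
    walk(heap[0][2], "")
    return codes

codes = huffman_code("abracadabra")
encoded = "".join(codes[c] for c in "abracadabra")
```

For "abracadabra" the symbol 'a' occurs five times out of eleven, so it gets a one-bit code, and the whole string codes into 23 bits instead of 88 bits of 8-bit ASCII. The adaptive variant updates the frequencies (and hence the tree) as symbols are seen, so no frequency table has to be transmitted.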
Lempel-Ziv is an example of modeling, which takes redundancy found in the particular kind of data being compressed, in this case text, and converts it into a series of symbols suitable for coding. Lempel-Ziv works on the assumption that strings of various lengths will often be repeated in the text, which is the case for natural languages.
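The idea can be sketched as a toy LZ77-style modeler. This is a deliberately naive version (the linear match search and the `window`/`min_match` parameters are illustrative choices, not any real codec's format; production compressors use hash chains or suffix structures to find matches): repeated strings become (distance, length) symbols, and everything else passes through as literals.

```python
def lz77_compress(data, window=4096, min_match=3):
    """Greedy LZ77-style modeling: emit literals and (distance, length)
    tokens. These tokens are the symbols a coder would then compress."""
    out = []
    i = 0
    while i < len(data):
        best_len, best_dist = 0, 0
        # Naive scan of the sliding window for the longest earlier match.
        for j in range(max(0, i - window), i):
            k = 0
            while i + k < len(data) and data[j + k] == data[i + k]:
                k += 1
            if k > best_len:
                best_len, best_dist = k, i - j
        if best_len >= min_match:
            out.append(("match", best_dist, best_len))
            i += best_len
        else:
            out.append(("lit", data[i]))
            i += 1
    return out

def lz77_decompress(tokens):
    """Rebuild the text by copying literals and back-references."""
    buf = []
    for t in tokens:
        if t[0] == "lit":
            buf.append(t[1])
        else:
            _, dist, length = t
            for _ in range(length):
                buf.append(buf[-dist])  # byte-wise copy allows overlap
    return "".join(buf)
```

Running `lz77_compress("this is a test, this is a test")` turns the repeated phrase into a single back-reference token. Note that the tokens themselves are then Huffman-coded (or similar) in real formats like deflate, which is exactly the modeling-then-coding split described here.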
That assumption doesn't hold at all for audio or image files, where the redundancy takes very different forms. There, transforms are performed on the data to separate out components by frequency as part of the modeling. In addition, lossy compression is acceptable for audio and image data consumed by humans, so data can be decimated or discarded depending on where it falls in the frequency domain, along with other ways of exploiting psycho-acoustic or psycho-visual redundancy.
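A minimal sketch of that frequency-domain approach, using a 1-D type-II DCT (the transform behind JPEG and many audio codecs, here hand-rolled rather than taken from any library). The frequency-weighted quantization steps in `lossy_roundtrip` are an illustrative assumption standing in for a real codec's perceptually tuned quantization tables: higher frequencies, which humans notice less, are quantized more coarsely and often collapse to zero.

```python
import math

def dct(x):
    """Type-II DCT: turn samples into frequency coefficients."""
    N = len(x)
    return [sum(x[n] * math.cos(math.pi / N * (n + 0.5) * k) for n in range(N))
            for k in range(N)]

def idct(X):
    """Type-III DCT, normalized to invert dct() above."""
    N = len(X)
    return [(X[0] / 2 + sum(X[k] * math.cos(math.pi / N * (n + 0.5) * k)
                            for k in range(1, N))) * 2 / N
            for n in range(N)]

def lossy_roundtrip(samples, q):
    """Quantize coefficients with a step that grows with frequency,
    discarding high-frequency detail more aggressively (the lossy step)."""
    coeffs = dct(samples)
    steps = [q * (1 + k) for k in range(len(coeffs))]
    quantized = [round(c / s) for c, s in zip(coeffs, steps)]
    dequantized = [v * s for v, s in zip(quantized, steps)]
    return idct(dequantized)
```

After quantization, the coefficient stream is full of zeros and small integers with a heavily skewed distribution, which is precisely what makes the subsequent coding stage effective.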
Once that sort of modeling is done, the same kinds of coding can be applied to pack the resulting symbols into a minimally sized stream of bits.
In short, compression consists of modeling, which is highly dependent on the type of data being compressed (and, for lossy compression, on the consumer of the data), followed by coding, which packs the resulting symbols into a compressed bit stream.