History
In the early days of UNIX, even simple tasks such as scanning text,
extracting strings, and printing reports required the use of more than
one program, and, accordingly, the number of people who were interested
in learning these skills was limited. Programs such as awk,
sed, and grep were hard enough to learn, and matters
were made even more complicated by the fact that these programs needed
to be tied together with shell scripts. Not only were there several dialects
of shell scripts, each with subtle differences, but stringing together several
programs in this fashion often proved to be too inefficient for practical
use, and the entire task would have to be rewritten in C. In an
environment
such as this, it's not surprising that even sophisticated users were frustrated,
and ordinary users felt that such tasks were well beyond their limits.
In the late 1980s, Larry Wall, then a programmer at the Jet Propulsion
Laboratory in Pasadena, California, decided that the best features of all the
popular programs and shell scripts could be incorporated into a single
language. He chose the name perl, partly as an acronym for "Practical
Extraction and Report Language," and partly because he just liked the sound
of the word. By using constructs that were already familiar to thousands of
programmers, he felt that he could create a language that many people could
start using with little additional training, since the general ideas were
already present in the programs they knew. By incorporating all of these
features in a single language, he could offer a far more efficient solution
to such problems than was then available, as well as provide a single,
centralized tool that even ordinary users could master. Although these goals
were lofty, most users of Perl agree that they have, to a very large extent,
been met.
Phil Spector
2002-10-18