PEP 270 – uniq method for list objects
- Author: Jason Petrone <jp at demonseed.net>
- Type: Standards Track
This PEP is withdrawn by the author. He writes:
Removing duplicate elements from a list is a common task, but there are only two reasons I can see for making it a built-in. The first is if it could be done much faster, which isn't the case. The second is if it makes it significantly easier to write code. The introduction of sets.py eliminates this situation since creating a sequence without duplicates is just a matter of choosing a different data structure: a set instead of a list.
As described in PEP 218, sets are being added to the standard library for Python 2.3.
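To illustrate the point, here is a brief sketch of the set-based approach the author refers to. It uses the built-in set type, which later superseded the sets.py module introduced by PEP 218; the variable names are illustrative only::

    # Choosing a set instead of a list makes duplicate handling automatic.
    words = ["red", "blue", "red", "green", "blue"]
    unique_words = set(words)        # duplicates collapse; iteration order is arbitrary
    print(unique_words)              # e.g. {'green', 'blue', 'red'}

    # Building the collection incrementally works the same way:
    seen = set()
    for w in words:
        seen.add(w)                  # adding an existing element is a no-op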
This PEP proposes adding a method for removing duplicate elements to the list object.
Removing duplicates from a list is a common task. I think it is useful and general enough to belong as a method in list objects. It also has potential for faster execution when implemented in C, especially when optimizations based on hashing or sorting cannot be used.
On comp.lang.python there are many, many posts asking about the best way to do this task. It's a little tricky to implement optimally and it would be nice to save people the trouble of figuring it out themselves.
Tim Peters suggests trying to use a hash table, then trying to sort, and finally falling back on brute force. Should uniq maintain list order at the expense of speed?
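A pure-Python sketch of that three-tier strategy follows; the real method would be C code, so the function name and details here are illustrative only. Note that only the hashing and brute-force tiers keep the original order, which is exactly the trade-off raised above::

    def uniq(seq):
        """Return a new list with duplicates removed, trying the
        cheapest strategy first and degrading gracefully."""
        # Tier 1: hashing -- O(n); requires hashable elements.
        try:
            return list(dict.fromkeys(seq))     # keeps first-seen order
        except TypeError:
            pass                                # something is unhashable

        # Tier 2: sorting -- O(n log n); requires comparable elements,
        # but the original order is lost.
        try:
            ordered = sorted(seq)
        except TypeError:
            pass                                # elements cannot be compared
        else:
            result = []
            for item in ordered:
                if not result or item != result[-1]:
                    result.append(item)
            return result

        # Tier 3: brute force -- O(n**2); only equality is needed.
        result = []
        for item in seq:
            if item not in result:
                result.append(item)
        return result

Only the fast hashing tier is both quick and order-preserving; once sorting is the best available option, keeping the original order would force the quadratic fallback.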
Is it spelled ‘uniq’ or ‘unique’?
I've written the brute force version. It's about 20 lines of code in listobject.c. Adding support for hash table and sorted duplicate removal would only take another hour or so.
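The C reference implementation itself is not included in the PEP; the following is a rough pure-Python analogue of the brute-force behaviour described. The in-place mutation (by analogy with list.sort()) is an assumption, not something the PEP specifies::

    def brute_force_uniq(lst):
        """Drop later duplicates from lst in place, keeping the first
        occurrence of each element; only equality tests are needed."""
        seen = []
        i = 0
        while i < len(lst):
            if lst[i] in seen:
                del lst[i]              # remove the repeated element
            else:
                seen.append(lst[i])
                i += 1

    data = [3, 1, 3, 2, 1]
    brute_force_uniq(data)
    print(data)                         # [3, 1, 2]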
This document has been placed in the public domain.