PEP 270 – uniq method for list objects
- Author:
- Jason Petrone <jp at demonseed.net>
- Status:
- Rejected
- Type:
- Standards Track
- Created:
- 21-Aug-2001
- Python-Version:
- 2.2
- Post-History:
Notice
This PEP is withdrawn by the author. He writes:
Removing duplicate elements from a list is a common task, but there are only two reasons I can see for making it a built-in. The first is if it could be done much faster, which isn't the case. The second is if it makes it significantly easier to write code. The introduction of sets.py eliminates this situation, since creating a sequence without duplicates is just a matter of choosing a different data structure: a set instead of a list.
As described in PEP 218, sets are being added to the standard library for Python 2.3.
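The point of the withdrawal notice can be illustrated with a short sketch (illustrative only, not part of the PEP): once sets are available, duplicate removal is a matter of picking the right container rather than adding a list method.

```python
# Sketch: removing duplicates by choosing a different data structure,
# as the withdrawal notice suggests (not part of the PEP itself).
items = [3, 1, 2, 3, 1]

# A set simply cannot hold duplicates (order is not preserved).
unique_unordered = set(items)                 # {1, 2, 3}

# dict.fromkeys preserves first-occurrence order (insertion-ordered
# dicts are guaranteed from Python 3.7 onward).
unique_ordered = list(dict.fromkeys(items))   # [3, 1, 2]
```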
Abstract
This PEP proposes adding a method for removing duplicate elements to the list object.
Rationale
Removing duplicates from a list is a common task. I think it is useful and general enough to belong as a method in list objects. It also has potential for faster execution when implemented in C, especially if optimization using hashing or sorting cannot be used.
On comp.lang.python there are many, many posts [1] asking about the best way to do this task. It's a little tricky to implement optimally, and it would be nice to save people the trouble of figuring it out themselves.
Considerations
Tim Peters suggests trying to use a hash table, then trying to sort, and finally falling back on brute force [2]. Should uniq maintain list order at the expense of speed?
Is it spelled ‘uniq’ or ‘unique’?
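The tiered strategy Tim Peters suggests can be sketched in Python (a sketch under my own reading of [2], not the PEP's C code): try an O(n) hash-based pass, fall back to O(n log n) sorting when elements are unhashable, and fall back to O(n²) brute force when they are also incomparable. Note that only the hashing and brute-force paths preserve the original order of first occurrence, which is exactly the order-versus-speed trade-off the question above raises.

```python
def uniq(seq):
    """Remove duplicates: try hashing, then sorting, then brute force.

    A sketch of the tiered strategy described in the Considerations
    section; not the PEP's actual reference implementation.
    """
    # 1. Hashing: O(n), preserves order, requires hashable elements.
    try:
        seen = set()
        out = []
        for x in seq:
            if x not in seen:       # raises TypeError if x is unhashable
                seen.add(x)
                out.append(x)
        return out
    except TypeError:
        pass
    # 2. Sorting: O(n log n), loses order, requires comparable elements.
    try:
        srt = sorted(seq)           # raises TypeError if incomparable
    except TypeError:
        pass
    else:
        out = []
        for x in srt:
            if not out or out[-1] != x:  # duplicates are now adjacent
                out.append(x)
        return out
    # 3. Brute force: O(n^2), preserves order, needs only equality.
    out = []
    for x in seq:
        if x not in out:
            out.append(x)
    return out
```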
Reference Implementation
I've written the brute force version. It's about 20 lines of code in listobject.c. Adding support for hash table and sorted duplicate removal would only take another hour or so.
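The brute-force approach mentioned above amounts to the following quadratic loop (a Python sketch for illustration; the actual reference implementation was C code in listobject.c):

```python
def uniq_brute_force(lst):
    """O(n^2) duplicate removal preserving first-occurrence order.

    A Python sketch of the brute-force version; the PEP's reference
    implementation was ~20 lines of C in listobject.c.
    """
    result = []
    for item in lst:
        if item not in result:   # linear membership scan -> O(n^2) overall
            result.append(item)
    return result
```

Because it relies only on equality comparisons, this version works for any elements, at the cost of quadratic time on large inputs.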
References
Copyright
This document has been placed in the public domain.
Source: https://github.com/python/peps/blob/main/peps/pep-0270.rst
Last modified: 2023-09-09 17:39:29 GMT