Lists are one of the more important parts of the Python programming language. They’re similar to a standard array in many respects. However, Python’s lists come with a wide variety of special features that make them a joy to work with. For example, we can store lists within a list, or easily remove duplicate elements from a list. Python’s list handling is flexible enough that we can go about that removal in a number of equally effective ways. Consider the following example for removing duplicate items from a list.
# Remove duplicates from a list with a loop
ourList = [9, 9, 1, 9, 4, 4, 7]
looper = []
for i in ourList:
    if i not in looper:
        looper.append(i)
ourList = looper
print(ourList)
This is an easy, but not necessarily concise, way to remove duplicates from a list. We’re essentially creating a new list and adding elements from the original one by one. If an item is already in the new list then it won’t be added again. Finally, we just reassign the contents of the new list back to the original variable. This works, but it doesn’t really use any of the strong points of Python’s list handling. This method isn’t too different from what we could accomplish in any other language. So let’s try something a little more concise and Python focused.
ourList = [9, 9, 1, 9, 4, 4, 7]
ourList = list(set(ourList))
print(ourList)
In this example we begin with the same list as before. But this time we simply reassign the initial ourList with the result of chaining the set and list functions. A set in Python, by definition, can’t contain duplicate elements. When we pass ourList to the set function it automatically reduces everything down to the unique elements. The list function then converts the result back into a list. The final step is assigning that result back to the original ourList variable.
This might seem like a perfect solution. And it is one of the quickest and easiest ways to remove duplicates from a list. But there is one caveat to using the set method. It doesn’t maintain the original order of the initial list. This isn’t always a problem. However, it’s something to be aware of when you’re writing Python code.
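If order does matter and you’re on Python 3.7 or later, where regular dictionaries preserve insertion order, dict.fromkeys offers a sketch of a middle ground: nearly as terse as the set approach, but order-preserving. This variant isn’t covered in the examples above; it’s included here as one possible alternative.

```python
# Python 3.7+: dict keys are unique and keep insertion order,
# so round-tripping through dict.fromkeys removes duplicates
# while preserving the order items first appeared.
ourList = [9, 9, 1, 9, 4, 4, 7]
ourList = list(dict.fromkeys(ourList))
print(ourList)  # [9, 1, 4, 7]
```

Note that, like the set approach, this only works when the list’s elements are hashable.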
Let’s look at an example that is a little more complex but nearly as concise. This time around we’ll remove duplicate items from a list through the use of list comprehension. This really leverages Python’s inherent flexibility.
ourList = ["The", "quick", "brown", "fox", "fox", "jumped"]
ourList = [i for n, i in enumerate(ourList) if i not in ourList[:n]]
print(ourList)
In this example we’re using a list composed of strings. However, we have a duplicate “fox” string in there which we’d like to get rid of. But we wouldn’t want the word order to change during the process. So we will instead use one of the most powerful but often overlooked Python built-ins – enumerate. This function pairs each element with its index, giving us a running count as the loop progresses. The comprehension checks each word against the slice of words that came before it (ourList[:n]) and keeps only the first occurrence, assigning the end product back to the ourList variable. After the print function displays our output we see that the word order has been maintained but the duplicate value of “fox” is gone from the sequence.
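One thing to keep in mind: the ourList[:n] slice rescans the earlier elements on every iteration, so this comprehension is quadratic on long lists. A common variant tracks already-seen items in a set for fast membership checks while still preserving order. The dedupe helper below is a hypothetical name chosen for this sketch, not part of the original examples.

```python
def dedupe(items):
    # Track items already emitted; set membership checks are
    # O(1) on average, versus scanning a list slice each pass.
    seen = set()
    result = []
    for item in items:
        if item not in seen:
            seen.add(item)
            result.append(item)
    return result

print(dedupe(["The", "quick", "brown", "fox", "fox", "jumped"]))
# ['The', 'quick', 'brown', 'fox', 'jumped']
```

This trades a little of the comprehension’s brevity for better scaling, while keeping the readable loop structure of the first example.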
In the end, the fact that we have multiple easy ways to accomplish a relatively advanced task shows the real power of the language. The first example, with its explicit loop, excels in readability. The second, using set, is the most concise way to go about things. And the third, the list comprehension, is a nice midway point between concise code and readability while also preserving the original order.