Four Ways to Deduplicate in Java with HashSet

1. Deduplication with LinkedHashSet

Preserves the original insertion order after deduplication (only one copy of each duplicate is kept).

import java.util.*;

String[] arr = new String[] {"i", "think", "i", "am", "the", "best"};
Collection<String> noDups = new LinkedHashSet<String>(Arrays.asList(arr));
System.out.println("(LinkedHashSet) distinct words: " + noDups); // [i, think, am, the, best]

2. HashSet deduplication, method one

Iteration order is not preserved after deduplication (only one copy of each duplicate is kept).

String[] arr = new String[] {"i", "think", "i", "am", "the", "best"};
Collection<String> noDups = new HashSet<String>(Arrays.asList(arr));
System.out.println("(HashSet) distinct words: " + noDups);

3. HashSet deduplication, method two

Iteration order is not preserved after deduplication (only one copy of each duplicate is kept). Here `Set.add` is called element by element; it returns `false` when the element is already present, which also lets you report each duplicate as it is found.

String[] arr = new String[] {"i", "think", "i", "am", "the", "best"};
Set<String> s = new HashSet<String>();
for (String a : arr)
{
    if (!s.add(a))
    {
        System.out.println("Duplicate detected: " + a);
    }
}
System.out.println(s.size() + " distinct words: " + s);

4. HashSet deduplication, method three

Iteration order is not preserved; any value that appears more than once is dropped entirely, keeping only the values that occur exactly once.

String[] arr = new String[] {"i", "think", "i", "am", "the", "best"};
Set<String> uniques = new HashSet<String>();
Set<String> dups = new HashSet<String>();
for (String a : arr)
{
    if (!uniques.add(a))
    {
        dups.add(a);
    }
}
uniques.removeAll(dups);
System.out.println("Unique words:    " + uniques);
System.out.println("Duplicate words: " + dups);
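For reference, the four fragments above can be combined into a single compilable class (the class name `DedupDemo` is an arbitrary choice for this sketch):

```java
import java.util.Arrays;
import java.util.Collection;
import java.util.HashSet;
import java.util.LinkedHashSet;
import java.util.Set;

public class DedupDemo {
    public static void main(String[] args) {
        String[] arr = {"i", "think", "i", "am", "the", "best"};

        // 1. LinkedHashSet: duplicates removed, insertion order kept
        Collection<String> ordered = new LinkedHashSet<>(Arrays.asList(arr));
        System.out.println("(LinkedHashSet) distinct words: " + ordered);

        // 2. HashSet via the collection constructor: duplicates removed, order undefined
        Collection<String> unordered = new HashSet<>(Arrays.asList(arr));
        System.out.println("(HashSet) distinct words: " + unordered);

        // 3. HashSet via add(): add() returns false if the element was already present
        Set<String> s = new HashSet<>();
        for (String a : arr) {
            if (!s.add(a)) {
                System.out.println("Duplicate detected: " + a);
            }
        }
        System.out.println(s.size() + " distinct words: " + s);

        // 4. Keep only the values that occur exactly once:
        //    collect duplicates as they are seen, then subtract them
        Set<String> uniques = new HashSet<>();
        Set<String> dups = new HashSet<>();
        for (String a : arr) {
            if (!uniques.add(a)) {
                dups.add(a);
            }
        }
        uniques.removeAll(dups);
        System.out.println("Unique words:    " + uniques);
        System.out.println("Duplicate words: " + dups);
    }
}
```

Only method 1's output order is guaranteed (`[i, think, am, the, best]`); the three HashSet variants make no ordering promise, so do not rely on the order they print.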
