There is a very large text file containing words. Given any two words, find the shortest distance between them in the file, measured as the number of words apart. If the lookup will be repeated many times over the same file, each time with a different pair of words, can you optimize for that?
Example:
Input: words = ["I","am","a","student","from","a","university","in","a","city"], word1 = "a", word2 = "student"
Output: 1
Constraints:
- words.length <= 100000
Python solutions:
1. Two pointers (single pass)
Scan the array once and remember the index of the most recent occurrence of either word; whenever the current word matches and the previous match was the other word, the gap between the two indices is a candidate answer.
from typing import List

class Solution:
    def findClosest(self, words: List[str], word1: str, word2: str) -> int:
        i, j = 0, -1                 # j is the index of the last match (-1 = none yet)
        value = float('inf')
        while i < len(words):
            if words[i] == word1 or words[i] == word2:
                # update only when the previous match was the other word
                if j != -1 and words[i] != words[j]:
                    value = min(value, i - j)
                j = i
            i += 1
        return value
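A quick sanity check against the example above (using the class defined in this snippet):

words = ["I", "am", "a", "student", "from", "a", "university", "in", "a", "city"]
print(Solution().findClosest(words, "a", "student"))   # expected output: 1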
2. Hashing (precomputed index lists)
Collect the occurrence indices of word1 and word2 into two sorted lists, then merge the lists with two pointers, always advancing the smaller index; the smallest gap seen during the merge is the answer. This is also the natural starting point for the follow-up, since the index lists depend only on the file, not on the queried pair (see the sketch after the code below).
from typing import List

class Solution:
    def findClosest(self, words: List[str], word1: str, word2: str) -> int:
        lis1, lis2 = [], []          # sorted occurrence indices of word1 and word2
        for i in range(len(words)):
            if words[i] == word1:
                lis1.append(i)
            elif words[i] == word2:
                lis2.append(i)
        # shortcut: all occurrences of one word come before all of the other
        if lis1[-1] < lis2[0]:
            return lis2[0] - lis1[-1]
        elif lis2[-1] < lis1[0]:
            return lis1[0] - lis2[-1]
        # merge the two index lists, always advancing the smaller index
        i, j = 0, 0
        value = float('inf')
        while i < len(lis1) and j < len(lis2):
            if lis1[i] < lis2[j]:
                value = min(value, lis2[j] - lis1[i])
                i += 1
            else:
                value = min(value, lis1[i] - lis2[j])
                j += 1
        return value
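For the follow-up (many lookups against the same file with different word pairs), the index lists can be built once and reused across queries. A minimal sketch of that idea; the class and method names here are my own for illustration, not part of the original post:

from collections import defaultdict
from typing import List

class WordDistance:
    def __init__(self, words: List[str]):
        # one-time preprocessing: word -> sorted list of its positions in the file
        self.index = defaultdict(list)
        for i, w in enumerate(words):
            self.index[w].append(i)

    def closest(self, word1: str, word2: str) -> int:
        # same merge as in solution 2, but over precomputed index lists;
        # returns inf if either word never occurs
        lis1, lis2 = self.index[word1], self.index[word2]
        i, j, value = 0, 0, float('inf')
        while i < len(lis1) and j < len(lis2):
            if lis1[i] < lis2[j]:
                value = min(value, lis2[j] - lis1[i])
                i += 1
            else:
                value = min(value, lis1[i] - lis2[j])
                j += 1
        return value

After the one-time O(n) preprocessing, each query only walks the two index lists of the queried words instead of rescanning the whole file.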