Multitran Translation Scraper Already Operational!

I was able to build a translation scraper from scratch in less than a day. It uses Requests to download a URL, Beautiful Soup to parse the HTML, and Beautiful Soup's select function to find all translations on a Multitran (Мультитран) entry page. It wasn't the easiest project, because Multitran still uses '90s-style tables without much markup to hook onto. But I found a way in by carefully studying the hyperlinks in the anchor tags of the translation entries. So that's our secret sauce. For now, it prints the translations to the console; later, I'll collect them into an array so I can replace non-desired translations of Russian terms with the desired ones.

import requests
from bs4 import BeautifulSoup

url = ''
# edit the URL manually until an import function is developed
r = requests.get(url)
print(r.status_code)
# a status code of 200 means that everything is okay
soup = BeautifulSoup(r.content, 'html.parser')

translations = soup.select("a[href*='m.exe?t=']")
# the secret sauce: every translation link has m.exe?t= in its href

# print out all translations
for translation in translations:
    print(translation.text)

#That's a wrap!
# Copyright Peter Charles Gleason, 2017
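
And here's a rough sketch of where this is headed next: collecting the scraped translations into a list and swapping out any non-desired renderings for preferred ones. The scrape_translations helper, the preferred_translations mapping, and pick_translation below are just placeholder names for illustration; the real import and replacement logic is still to be written.

import requests
from bs4 import BeautifulSoup

def scrape_translations(url):
    """Return every translation on a Multitran entry page as a list of strings."""
    r = requests.get(url)
    soup = BeautifulSoup(r.content, 'html.parser')
    # same secret sauce as above: translation links contain m.exe?t= in their href
    return [a.text for a in soup.select("a[href*='m.exe?t=']")]

# hypothetical mapping of non-desired renderings to the desired ones
preferred_translations = {
    'non-desired rendering': 'desired rendering',
}

def pick_translation(text):
    """Swap a scraped translation for the preferred one, if there is one."""
    return preferred_translations.get(text, text)

# usage sketch (the url is still filled in manually, as above):
# for t in scrape_translations(url):
#     print(pick_translation(t))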

Onward and upward!