Update upstream sources to 0.10.0
This commit is contained in:
parent
7028ebaf83
commit
1f21305b7c
@@ -1,4 +1,5 @@
.coverage
coverage/
.installed.cfg
engines.cfg
env
@@ -16,11 +16,10 @@ install:
  - ./manage.sh update_dev_packages
  - pip install coveralls
script:
  - ./manage.sh pep8_check
  - ./manage.sh styles
  - ./manage.sh grunt_build
  - ./manage.sh tests
  - ./manage.sh py_test_coverage
  - ./manage.sh robot_tests
after_success:
  coveralls
notifications:
@@ -1,4 +1,4 @@
Searx was created by Adam Tauber and is maintained by Adam Tauber and Alexandre Flament.
Searx was created by Adam Tauber and is maintained by Adam Tauber, Alexandre Flament and Noémi Ványi.

Major contributing authors:
@@ -7,6 +7,7 @@ Major contributing authors:
- Thomas Pointhuber
- Alexandre Flament `@dalf <https://github.com/dalf>`_
- @Cqoicebordel
- Noémi Ványi

People who have submitted patches/translates, reported bugs, consulted features or
generally made searx better:
@@ -39,15 +40,21 @@ generally made searx better:
- @underr
- Emmanuel Benazera
- @GreenLunar
- Noemi Vanyi
- Kang-min Liu
- Kirill Isakov
- Guilhem Bonnefille
- Marc Abonce Seguin
- @jibe-b
- Christian Pietsch @pietsch
- @Maxqia
- Ashutosh Das @pyprism
- YuLun Shih @imZack
- Dmitry Mikhirev @mikhirev
- David A Roberts `@davidar <https://github.com/davidar>`_
- Jan Verbeek @blyxxyz
- Ammar Najjar @ammarnajjar
- @stepshal
- François Revol @mmuman
- marc @a01200356
- Harry Wood @harry-wood
- Thomas Renard @threnard
@@ -1,3 +1,38 @@
0.10.0 2016.09.06
=================

- New engines

  - Archive.is (general)
  - INA (videos)
  - Scanr (science)
  - Google Scholar (science)
  - Crossref (science)
  - Openrepos (files)
  - Microsoft Academic Search Engine (science)
  - Hoogle (it)
  - Diggbt (files)
  - Dictzone (general - dictionary)
  - Translated (general - translation)

- New Plugins

  - Infinite scroll on results page
  - DOAI rewrite

- Full theme redesign
- Display the number of results
- Filter searches by date range
- Instance config API endpoint
- Dependency version updates
- Socks proxy support for outgoing requests
- 404 page

News
~~~~

@kvch joined the maintainer team


0.9.0 2016.05.24
================
@@ -36,6 +71,7 @@
- Multilingual autocompleter
- Qwant autocompleter backend


0.8.1 2015.12.22
================
@@ -15,7 +15,7 @@ Installation
~~~~~~~~~~~~

- clone source:
  ``git clone git@github.com:asciimoo/searx.git && cd searx``
  ``git clone https://github.com/asciimoo/searx.git && cd searx``
- install dependencies: ``./manage.sh update_packages``
- edit your
  `settings.yml <https://github.com/asciimoo/searx/blob/master/searx/settings.yml>`__
@@ -1,5 +1,6 @@

categories = ['general']  # optional
categories = ['general']  # optional


def request(query, params):
    '''pre-request callback
@@ -22,4 +23,3 @@ def response(resp):
    resp: requests response object
    '''
    return [{'url': '', 'title': '', 'content': ''}]
@@ -1,6 +1,6 @@
#!/bin/sh

BASE_DIR=$(dirname `readlink -f $0`)
BASE_DIR=$(dirname "`readlink -f "$0"`")
PYTHONPATH=$BASE_DIR
SEARX_DIR="$BASE_DIR/searx"
ACTION=$1
@@ -58,7 +58,8 @@ styles() {
    build_style themes/courgette/less/style.less themes/courgette/css/style.css
    build_style themes/courgette/less/style-rtl.less themes/courgette/css/style-rtl.css
    build_style less/bootstrap/bootstrap.less css/bootstrap.min.css
    build_style themes/oscar/less/oscar/oscar.less themes/oscar/css/oscar.min.css
    build_style themes/oscar/less/pointhi/oscar.less themes/oscar/css/pointhi.min.css
    build_style themes/oscar/less/logicodev/oscar.less themes/oscar/css/logicodev.min.css
    build_style themes/pix-art/less/style.less themes/pix-art/css/style.css
}
@@ -1,8 +1,8 @@
babel==2.2.0
mock==1.0.1
babel==2.3.4
mock==2.0.0
nose2[coverage-plugin]
pep8==1.7.0
plone.testing==4.0.15
plone.testing==5.0.0
robotframework-selenium2library==1.7.4
robotsuite==1.7.0
transifex-client==0.11
@@ -1,12 +1,12 @@
certifi==2015.11.20.1
flask==0.10.1
flask-babel==0.9
lxml==3.5.0
ndg-httpsclient==0.4.0
certifi==2016.2.28
flask==0.11.1
flask-babel==0.11.1
lxml==3.6.0
ndg-httpsclient==0.4.1
pyasn1==0.1.9
pyasn1-modules==0.0.8
pygments==2.0.2
pygments==2.1.3
pyopenssl==0.15.1
python-dateutil==2.4.2
python-dateutil==2.5.3
pyyaml==3.11
requests==2.9.1
requests[socks]==2.10.0
@@ -19,7 +19,7 @@ along with searx. If not, see < http://www.gnu.org/licenses/ >.
from os.path import realpath, dirname, splitext, join
import sys
from imp import load_source
from flask.ext.babel import gettext
from flask_babel import gettext
from operator import itemgetter
from searx import settings
from searx import logger
@@ -42,7 +42,8 @@ engine_default_args = {'paging': False,
                       'shortcut': '-',
                       'disabled': False,
                       'suspend_end_time': 0,
                       'continuous_errors': 0}
                       'continuous_errors': 0,
                       'time_range_support': False}


def load_module(filename):
@@ -57,7 +58,11 @@ def load_module(filename):

def load_engine(engine_data):
    engine_name = engine_data['engine']
    engine = load_module(engine_name + '.py')
    try:
        engine = load_module(engine_name + '.py')
    except:
        logger.exception('Cannot load engine "{}"'.format(engine_name))
        return None

    for param_name in engine_data:
        if param_name == 'engine':
@@ -199,4 +204,5 @@ if 'engines' not in settings or not settings['engines']:

for engine_data in settings['engines']:
    engine = load_engine(engine_data)
    engines[engine.name] = engine
    if engine is not None:
        engines[engine.name] = engine
@@ -34,6 +34,7 @@ def locale_to_lang_code(locale):
    locale = locale.split('_')[0]
    return locale


# wikis for some languages were moved off from the main site, we need to make
# requests to correct URLs to be able to get results in those languages
lang_urls = {
@@ -70,6 +71,7 @@ def get_lang_urls(language):
        return lang_urls[language]
    return lang_urls['all']


# Language names to build search requests for
# those languages which are hosted on the main site.
main_langs = {
@@ -16,6 +16,7 @@ from urllib import quote
from lxml import html
from operator import itemgetter
from searx.engines.xpath import extract_text
from searx.utils import get_torrent_size

# engine dependent config
categories = ['videos', 'music', 'files']
@@ -68,20 +69,7 @@ def response(resp):
        leech = 0

        # convert filesize to byte if possible
        try:
            filesize = float(filesize)

            # convert filesize to byte
            if filesize_multiplier == 'TB':
                filesize = int(filesize * 1024 * 1024 * 1024 * 1024)
            elif filesize_multiplier == 'GB':
                filesize = int(filesize * 1024 * 1024 * 1024)
            elif filesize_multiplier == 'MB':
                filesize = int(filesize * 1024 * 1024)
            elif filesize_multiplier == 'KB':
                filesize = int(filesize * 1024)
        except:
            filesize = None
        filesize = get_torrent_size(filesize, filesize_multiplier)

        # convert files to int if possible
        if files.isdigit():
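The inline unit-conversion block above was replaced with a shared `get_torrent_size` helper from `searx.utils`. A minimal sketch of what such a helper could look like (the real implementation in searx may differ in details):

```python
def get_torrent_size(filesize, filesize_multiplier):
    # convert a (value, unit) pair such as ('1.5', 'GB') to bytes;
    # returns None when the value cannot be parsed, mirroring the
    # old try/except fallback in the engine
    multipliers = {
        'TB': 1024 ** 4,
        'GB': 1024 ** 3,
        'MB': 1024 ** 2,
        'KB': 1024,
    }
    try:
        return int(float(filesize) * multipliers.get(filesize_multiplier, 1))
    except (TypeError, ValueError):
        return None


print(get_torrent_size('1.5', 'GB'))  # 1610612736
```

Centralizing the conversion lets both kickass and the new DigBT engine share one tested code path instead of duplicating the multiplier chain.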
@@ -9,7 +9,7 @@ categories = []
url = 'https://download.finance.yahoo.com/d/quotes.csv?e=.csv&f=sl1d1t1&s={query}=X'
weight = 100

parser_re = re.compile(u'.*?(\d+(?:\.\d+)?) ([^.0-9]+) (?:in|to) ([^.0-9]+)', re.I)  # noqa
parser_re = re.compile(u'.*?(\\d+(?:\\.\\d+)?) ([^.0-9]+) (?:in|to) ([^.0-9]+)', re.I)  # noqa

db = 1
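The change above only doubles the backslashes so the pattern survives inside a plain (non-raw) unicode literal; the matched grammar is unchanged. A small sketch of the same pattern written as a raw string, showing the amount/from/to groups it extracts:

```python
import re

# equivalent to the engine's parser_re, written as a raw string
parser_re = re.compile(r'.*?(\d+(?:\.\d+)?) ([^.0-9]+) (?:in|to) ([^.0-9]+)', re.I)

m = parser_re.match('10.5 usd in eur')
print(m.groups())  # ('10.5', 'usd', 'eur')
```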
@@ -13,7 +13,6 @@
"""

from urllib import urlencode
from urlparse import urljoin
from lxml import html
import re
from searx.engines.xpath import extract_text
@@ -21,10 +20,16 @@ from searx.engines.xpath import extract_text
# engine dependent config
categories = ['images']
paging = True
time_range_support = True

# search-url
base_url = 'https://www.deviantart.com/'
search_url = base_url + 'browse/all/?offset={offset}&{query}'
time_range_url = '&order={range}'

time_range_dict = {'day': 11,
                   'week': 14,
                   'month': 15}


# do search-request
@@ -33,6 +38,8 @@ def request(query, params):

    params['url'] = search_url.format(offset=offset,
                                      query=urlencode({'q': query}))
    if params['time_range'] in time_range_dict:
        params['url'] += time_range_url.format(range=time_range_dict[params['time_range']])

    return params
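The new `time_range` handling simply maps the UI range name to DeviantArt's numeric `order` value and appends it to the URL. A standalone sketch of that step (Python 3 `urllib.parse` used here instead of the Python 2 `urllib` in the engine):

```python
from urllib.parse import urlencode

time_range_dict = {'day': 11, 'week': 14, 'month': 15}
time_range_url = '&order={range}'

# build the search URL, then append the order parameter for a range filter
url = 'https://www.deviantart.com/browse/all/?offset=0&' + urlencode({'q': 'cat'})
time_range = 'week'
if time_range in time_range_dict:
    url += time_range_url.format(range=time_range_dict[time_range])

print(url)  # https://www.deviantart.com/browse/all/?offset=0&q=cat&order=14
```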
@@ -47,14 +54,13 @@ def response(resp):

    dom = html.fromstring(resp.text)

    regex = re.compile('\/200H\/')
    regex = re.compile(r'\/200H\/')

    # parse results
    for result in dom.xpath('//div[contains(@class, "tt-a tt-fh")]'):
        link = result.xpath('.//a[contains(@class, "thumb")]')[0]
        url = urljoin(base_url, link.attrib.get('href'))
        title_links = result.xpath('.//span[@class="details"]//a[contains(@class, "t")]')
        title = extract_text(title_links[0])
    for result in dom.xpath('.//span[@class="thumb wide"]'):
        link = result.xpath('.//a[@class="torpedo-thumb-link"]')[0]
        url = link.attrib.get('href')
        title = extract_text(result.xpath('.//span[@class="title"]'))
        thumbnail_src = link.xpath('.//img')[0].attrib.get('src')
        img_src = regex.sub('/', thumbnail_src)
@@ -0,0 +1,69 @@
"""
Dictzone

@website https://dictzone.com/
@provide-api no
@using-api no
@results HTML (using search portal)
@stable no (HTML can change)
@parse url, title, content
"""

import re
from urlparse import urljoin
from lxml import html
from cgi import escape
from searx.utils import is_valid_lang

categories = ['general']
url = u'http://dictzone.com/{from_lang}-{to_lang}-dictionary/{query}'
weight = 100

parser_re = re.compile(u'.*?([a-z]+)-([a-z]+) ([^ ]+)$', re.I)
results_xpath = './/table[@id="r"]/tr'


def request(query, params):
    m = parser_re.match(unicode(query, 'utf8'))
    if not m:
        return params

    from_lang, to_lang, query = m.groups()

    from_lang = is_valid_lang(from_lang)
    to_lang = is_valid_lang(to_lang)

    if not from_lang or not to_lang:
        return params

    params['url'] = url.format(from_lang=from_lang[2],
                               to_lang=to_lang[2],
                               query=query)

    return params


def response(resp):
    results = []

    dom = html.fromstring(resp.text)

    for k, result in enumerate(dom.xpath(results_xpath)[1:]):
        try:
            from_result, to_results_raw = result.xpath('./td')
        except:
            continue

        to_results = []
        for to_result in to_results_raw.xpath('./p/a'):
            t = to_result.text_content()
            if t.strip():
                to_results.append(to_result.text_content())

        results.append({
            'url': urljoin(resp.url, '?%d' % k),
            'title': escape(from_result.text_content()),
            'content': escape('; '.join(to_results))
        })

    return results
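The Dictzone engine recognizes queries of the form `<from>-<to> <word>` via `parser_re`. A short Python 3 sketch of just that parsing step, showing the groups the engine works with:

```python
import re

# same query grammar as the engine's parser_re
parser_re = re.compile(r'.*?([a-z]+)-([a-z]+) ([^ ]+)$', re.I)

m = parser_re.match('en-hu dog')
print(m.groups())  # ('en', 'hu', 'dog')

# queries that do not follow the grammar simply do not match
assert parser_re.match('just a normal query') is None
```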
@@ -0,0 +1,58 @@
"""
DigBT (Videos, Music, Files)

@website https://digbt.org
@provide-api no

@using-api no
@results HTML (using search portal)
@stable no (HTML can change)
@parse url, title, content, magnetlink
"""

from urlparse import urljoin
from lxml import html
from searx.engines.xpath import extract_text
from searx.utils import get_torrent_size

categories = ['videos', 'music', 'files']
paging = True

URL = 'https://digbt.org'
SEARCH_URL = URL + '/search/{query}-time-{pageno}'
FILESIZE = 3
FILESIZE_MULTIPLIER = 4


def request(query, params):
    params['url'] = SEARCH_URL.format(query=query, pageno=params['pageno'])

    return params


def response(resp):
    dom = html.fromstring(resp.content)
    search_res = dom.xpath('.//td[@class="x-item"]')

    if not search_res:
        return list()

    results = list()
    for result in search_res:
        url = urljoin(URL, result.xpath('.//a[@title]/@href')[0])
        title = result.xpath('.//a[@title]/text()')[0]
        content = extract_text(result.xpath('.//div[@class="files"]'))
        files_data = extract_text(result.xpath('.//div[@class="tail"]')).split()
        filesize = get_torrent_size(files_data[FILESIZE], files_data[FILESIZE_MULTIPLIER])
        magnetlink = result.xpath('.//div[@class="tail"]//a[@class="title"]/@href')[0]

        results.append({'url': url,
                        'title': title,
                        'content': content,
                        'filesize': filesize,
                        'magnetlink': magnetlink,
                        'seed': 'N/A',
                        'leech': 'N/A',
                        'template': 'torrent.html'})

    return results
@@ -11,21 +11,26 @@
 @parse url, title, content

 @todo rewrite to api
 @todo language support
       (the current used site does not support language-change)
"""

from urllib import urlencode
from lxml.html import fromstring
from searx.engines.xpath import extract_text
from searx.languages import language_codes

# engine dependent config
categories = ['general']
paging = True
language_support = True
time_range_support = True

# search-url
url = 'https://duckduckgo.com/html?{query}&s={offset}'
time_range_url = '&df={range}'

time_range_dict = {'day': 'd',
                   'week': 'w',
                   'month': 'm'}

# specific xpath variables
result_xpath = '//div[@class="result results_links results_links_deep web-result "]'  # noqa
@@ -39,13 +44,31 @@ def request(query, params):
    offset = (params['pageno'] - 1) * 30

    if params['language'] == 'all':
        locale = 'en-us'
        locale = None
    else:
        locale = params['language'].replace('_', '-').lower()
        locale = params['language'].split('_')
        if len(locale) == 2:
            # country code goes first
            locale = locale[1].lower() + '-' + locale[0].lower()
        else:
            # tries to get a country code from language
            locale = locale[0].lower()
            lang_codes = [x[0] for x in language_codes]
            for lc in lang_codes:
                lc = lc.split('_')
                if locale == lc[0]:
                    locale = lc[1].lower() + '-' + lc[0].lower()
                    break

    params['url'] = url.format(
        query=urlencode({'q': query, 'kl': locale}),
        offset=offset)
    if locale:
        params['url'] = url.format(
            query=urlencode({'q': query, 'kl': locale}), offset=offset)
    else:
        params['url'] = url.format(
            query=urlencode({'q': query}), offset=offset)

    if params['time_range'] in time_range_dict:
        params['url'] += time_range_url.format(range=time_range_dict[params['time_range']])

    return params
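The new DuckDuckGo locale logic swaps the two halves of a `lang_COUNTRY` code, because DuckDuckGo's `kl` parameter expects the country code first. A standalone sketch of that mapping for the full two-part case (the fallback path that guesses a country from `language_codes` is omitted here):

```python
def ddg_locale(language):
    # mirror the len(locale) == 2 branch above:
    # 'de_DE' -> 'de-de', 'pt_BR' -> 'br-pt' (country code first)
    lang, country = language.split('_')
    return country.lower() + '-' + lang.lower()


print(ddg_locale('pt_BR'))  # br-pt
print(ddg_locale('en_US'))  # us-en
```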
@@ -8,6 +8,7 @@ paging = True


class FilecropResultParser(HTMLParser):

    def __init__(self):
        HTMLParser.__init__(self)
        self.__start_processing = False
@@ -24,6 +24,7 @@ categories = ['general']
paging = True
language_support = True
use_locale_domain = True
time_range_support = True

# based on https://en.wikipedia.org/wiki/List_of_Google_domains and tests
default_hostname = 'www.google.com'
@@ -92,6 +93,11 @@ search_url = ('https://{hostname}' +
              search_path +
              '?{query}&start={offset}&gws_rd=cr&gbv=1&lr={lang}&ei=x')

time_range_search = "&tbs=qdr:{range}"
time_range_dict = {'day': 'd',
                   'week': 'w',
                   'month': 'm'}

# other URLs
map_hostname_start = 'maps.google.'
maps_path = '/maps'
@@ -179,6 +185,8 @@ def request(query, params):
                                      query=urlencode({'q': query}),
                                      hostname=google_hostname,
                                      lang=url_lang)
    if params['time_range'] in time_range_dict:
        params['url'] += time_range_search.format(range=time_range_dict[params['time_range']])

    params['headers']['Accept-Language'] = language
    params['headers']['Accept'] = 'text/html,application/xhtml+xml,application/xml;q=0.9,*/*;q=0.8'
@@ -300,9 +308,9 @@ def parse_map_detail(parsed_url, result, google_hostname):
    results = []

    # try to parse the geoloc
    m = re.search('@([0-9\.]+),([0-9\.]+),([0-9]+)', parsed_url.path)
    m = re.search(r'@([0-9\.]+),([0-9\.]+),([0-9]+)', parsed_url.path)
    if m is None:
        m = re.search('ll\=([0-9\.]+),([0-9\.]+)\&z\=([0-9]+)', parsed_url.query)
        m = re.search(r'll\=([0-9\.]+),([0-9\.]+)\&z\=([0-9]+)', parsed_url.query)

    if m is not None:
        # geoloc found (ignored)
@@ -11,7 +11,6 @@
"""

from urllib import urlencode
from urlparse import parse_qs
from json import loads
from lxml import html
@@ -19,24 +18,38 @@ from lxml import html
# engine dependent config
categories = ['images']
paging = True
safesearch = True
time_range_support = True
number_of_results = 100

search_url = 'https://www.google.com/search'\
    '?{query}'\
    '&asearch=ichunk'\
    '&async=_id:rg_s,_pms:s'\
    '&tbm=isch'\
    '&ijn=1'\
    '&start={offset}'
    '&yv=2'\
    '&{search_options}'
time_range_attr = "qdr:{range}"
time_range_dict = {'day': 'd',
                   'week': 'w',
                   'month': 'm'}


# do search-request
def request(query, params):
    offset = (params['pageno'] - 1) * 100

    params['url'] = search_url.format(query=urlencode({'q': query}),
                                      offset=offset,
                                      safesearch=safesearch)
    search_options = {
        'ijn': params['pageno'] - 1,
        'start': (params['pageno'] - 1) * number_of_results
    }

    if params['time_range'] in time_range_dict:
        search_options['tbs'] = time_range_attr.format(range=time_range_dict[params['time_range']])

    if safesearch and params['safesearch']:
        params['url'] += '&' + urlencode({'safe': 'active'})
        search_options['safe'] = 'on'

    params['url'] = search_url.format(query=urlencode({'q': query}),
                                      search_options=urlencode(search_options))

    return params
@@ -45,12 +58,17 @@ def request(query, params):
def response(resp):
    results = []

    dom = html.fromstring(resp.text)
    g_result = loads(resp.text)

    dom = html.fromstring(g_result[1][1])

    # parse results
    for result in dom.xpath('//div[@data-ved]'):

        metadata = loads(result.xpath('./div[@class="rg_meta"]/text()')[0])
        try:
            metadata = loads(''.join(result.xpath('./div[@class="rg_meta"]/text()')))
        except:
            continue

        thumbnail_src = metadata['tu']
@@ -0,0 +1,83 @@
# INA (Videos)
#
# @website https://www.ina.fr/
# @provide-api no
#
# @using-api no
# @results HTML (using search portal)
# @stable no (HTML can change)
# @parse url, title, content, publishedDate, thumbnail
#
# @todo set content-parameter with correct data
# @todo embedded (needs some md5 from video page)

from json import loads
from urllib import urlencode
from lxml import html
from HTMLParser import HTMLParser
from searx.engines.xpath import extract_text
from dateutil import parser

# engine dependent config
categories = ['videos']
paging = True
page_size = 48

# search-url
base_url = 'https://www.ina.fr'
search_url = base_url + '/layout/set/ajax/recherche/result?autopromote=&hf={ps}&b={start}&type=Video&r=&{query}'

# specific xpath variables
results_xpath = '//div[contains(@class,"search-results--list")]/div[@class="media"]'
url_xpath = './/a/@href'
title_xpath = './/h3[@class="h3--title media-heading"]'
thumbnail_xpath = './/img/@src'
publishedDate_xpath = './/span[@class="broadcast"]'
content_xpath = './/p[@class="media-body__summary"]'


# do search-request
def request(query, params):
    params['url'] = search_url.format(ps=page_size,
                                      start=params['pageno'] * page_size,
                                      query=urlencode({'q': query}))

    return params


# get response from search-request
def response(resp):
    results = []

    # we get html in a JSON container...
    response = loads(resp.text)
    if "content" not in response:
        return []
    dom = html.fromstring(response["content"])
    p = HTMLParser()

    # parse results
    for result in dom.xpath(results_xpath):
        videoid = result.xpath(url_xpath)[0]
        url = base_url + videoid
        title = p.unescape(extract_text(result.xpath(title_xpath)))
        thumbnail = extract_text(result.xpath(thumbnail_xpath)[0])
        if thumbnail[0] == '/':
            thumbnail = base_url + thumbnail
        d = extract_text(result.xpath(publishedDate_xpath)[0])
        d = d.split('/')
        # force ISO date to avoid wrong parsing
        d = "%s-%s-%s" % (d[2], d[1], d[0])
        publishedDate = parser.parse(d)
        content = extract_text(result.xpath(content_xpath))

        # append result
        results.append({'url': url,
                        'title': title,
                        'content': content,
                        'template': 'videos.html',
                        'publishedDate': publishedDate,
                        'thumbnail': thumbnail})

    # return results
    return results
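The INA engine reorders the scraped `DD/MM/YYYY` broadcast date into ISO form before handing it to `dateutil`, so the parser cannot confuse day and month. The reordering step in isolation:

```python
# reorder a 'DD/MM/YYYY' date into ISO 'YYYY-MM-DD', as done above
d = '06/09/2016'.split('/')
iso = "%s-%s-%s" % (d[2], d[1], d[0])
print(iso)  # 2016-09-06
```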
@@ -6,7 +6,16 @@ search_url = None
url_query = None
content_query = None
title_query = None
# suggestion_xpath = ''
suggestion_query = ''
results_query = ''

# parameters for engines with paging support
#
# number of results on each page
# (only needed if the site requires not a page number, but an offset)
page_size = 1
# number of the first page (usually 0 or 1)
first_page_num = 1


def iterate(iterable):
@@ -69,19 +78,36 @@ def query(data, query_string):

def request(query, params):
    query = urlencode({'q': query})[2:]
    params['url'] = search_url.format(query=query)

    fp = {'query': query}
    if paging and search_url.find('{pageno}') >= 0:
        fp['pageno'] = (params['pageno'] - 1) * page_size + first_page_num

    params['url'] = search_url.format(**fp)
    params['query'] = query

    return params


def response(resp):
    results = []

    json = loads(resp.text)
    if results_query:
        for result in query(json, results_query)[0]:
            url = query(result, url_query)[0]
            title = query(result, title_query)[0]
            content = query(result, content_query)[0]
            results.append({'url': url, 'title': title, 'content': content})
    else:
        for url, title, content in zip(
            query(json, url_query),
            query(json, title_query),
            query(json, content_query)
        ):
            results.append({'url': url, 'title': title, 'content': content})

    urls = query(json, url_query)
    contents = query(json, content_query)
    titles = query(json, title_query)
    for url, title, content in zip(urls, titles, contents):
        results.append({'url': url, 'title': title, 'content': content})
    if not suggestion_query:
        return results
    for suggestion in query(json, suggestion_query):
        results.append({'suggestion': suggestion})
    return results
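The new `page_size`/`first_page_num` options let `{pageno}` in a JSON engine's URL stand for either a page number or a result offset. The value substituted into the URL is computed as above; a tiny sketch with an illustrative helper name:

```python
def first_result_index(pageno, page_size=10, first_page_num=1):
    # the value substituted for {pageno}: with page_size=1 this is just
    # the page number; with a larger page_size it becomes an offset
    return (pageno - 1) * page_size + first_page_num


print(first_result_index(1))                  # 1
print(first_result_index(3))                  # 21
print(first_result_index(3, page_size=1))     # 3
```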
@@ -0,0 +1,78 @@
"""
ScanR Structures (Science)

@website https://scanr.enseignementsup-recherche.gouv.fr
@provide-api yes (https://scanr.enseignementsup-recherche.gouv.fr/api/swagger-ui.html)

@using-api yes
@results JSON
@stable yes
@parse url, title, content, img_src
"""

from urllib import urlencode
from json import loads, dumps
from dateutil import parser
from searx.utils import html_to_text

# engine dependent config
categories = ['science']
paging = True
page_size = 20

# search-url
url = 'https://scanr.enseignementsup-recherche.gouv.fr/'
search_url = url + 'api/structures/search'


# do search-request
def request(query, params):

    params['url'] = search_url
    params['method'] = 'POST'
    params['headers']['Content-type'] = "application/json"
    params['data'] = dumps({"query": query,
                            "searchField": "ALL",
                            "sortDirection": "ASC",
                            "sortOrder": "RELEVANCY",
                            "page": params['pageno'],
                            "pageSize": page_size})

    return params


# get response from search-request
def response(resp):
    results = []

    search_res = loads(resp.text)

    # return empty array if there are no results
    if search_res.get('total') < 1:
        return []

    # parse results
    for result in search_res['results']:
        if 'id' not in result:
            continue

        # is it thumbnail or img_src??
        thumbnail = None
        if 'logo' in result:
            thumbnail = result['logo']
            if thumbnail[0] == '/':
                thumbnail = url + thumbnail

        content = None
        if 'highlights' in result:
            content = result['highlights'][0]['value']

        # append result
        results.append({'url': url + 'structure/' + result['id'],
                        'title': result['label'],
                        # 'thumbnail': thumbnail,
                        'img_src': thumbnail,
                        'content': html_to_text(content)})

    # return results
    return results
@@ -57,6 +57,7 @@ def get_client_id():
    logger.warning("Unable to fetch guest client_id from SoundCloud, check parser!")
    return ""


# api-key
guest_client_id = get_client_id()
@@ -68,15 +68,15 @@ def response(resp):
        url = link.attrib.get('href')

        # block google-ad url's
        if re.match("^http(s|)://(www\.)?google\.[a-z]+/aclk.*$", url):
        if re.match(r"^http(s|)://(www\.)?google\.[a-z]+/aclk.*$", url):
            continue

        # block startpage search url's
        if re.match("^http(s|)://(www\.)?startpage\.com/do/search\?.*$", url):
        if re.match(r"^http(s|)://(www\.)?startpage\.com/do/search\?.*$", url):
            continue

        # block ixquick search url's
        if re.match("^http(s|)://(www\.)?ixquick\.com/do/search\?.*$", url):
        if re.match(r"^http(s|)://(www\.)?ixquick\.com/do/search\?.*$", url):
            continue

        title = escape(extract_text(link))
@@ -89,7 +89,7 @@ def response(resp):
        published_date = None

        # check if search result starts with something like: "2 Sep 2014 ... "
        if re.match("^([1-9]|[1-2][0-9]|3[0-1]) [A-Z][a-z]{2} [0-9]{4} \.\.\. ", content):
        if re.match(r"^([1-9]|[1-2][0-9]|3[0-1]) [A-Z][a-z]{2} [0-9]{4} \.\.\. ", content):
            date_pos = content.find('...') + 4
            date_string = content[0:date_pos - 5]
            published_date = parser.parse(date_string, dayfirst=True)
@@ -98,7 +98,7 @@ def response(resp):
            content = content[date_pos:]

        # check if search result starts with something like: "5 days ago ... "
        elif re.match("^[0-9]+ days? ago \.\.\. ", content):
        elif re.match(r"^[0-9]+ days? ago \.\.\. ", content):
            date_pos = content.find('...') + 4
            date_string = content[0:date_pos - 5]
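These Startpage hunks only switch the patterns to raw strings; the date-prefix handling itself is unchanged. A standalone sketch of how the first branch splits a snippet into a date string and the remaining content:

```python
import re

# strip a leading date like "2 Sep 2014 ... " from a snippet, as above
content = "2 Sep 2014 ... some result snippet"
if re.match(r"^([1-9]|[1-2][0-9]|3[0-1]) [A-Z][a-z]{2} [0-9]{4} \.\.\. ", content):
    date_pos = content.find('...') + 4
    date_string = content[0:date_pos - 5]
    content = content[date_pos:]

print(date_string)  # 2 Sep 2014
print(content)      # some result snippet
```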
@@ -25,10 +25,10 @@ base_url = 'https://swisscows.ch/'
search_string = '?{query}&page={page}'

# regex
regex_json = re.compile('initialData: {"Request":(.|\n)*},\s*environment')
regex_json_remove_start = re.compile('^initialData:\s*')
regex_json_remove_end = re.compile(',\s*environment$')
regex_img_url_remove_start = re.compile('^https?://i\.swisscows\.ch/\?link=')
regex_json = re.compile(r'initialData: {"Request":(.|\n)*},\s*environment')
regex_json_remove_start = re.compile(r'^initialData:\s*')
regex_json_remove_end = re.compile(r',\s*environment$')
regex_img_url_remove_start = re.compile(r'^https?://i\.swisscows\.ch/\?link=')


# do search-request
@@ -48,7 +48,7 @@ def response(resp):
        return []

    # regular expression for parsing torrent size strings
    size_re = re.compile('Size:\s*([\d.]+)(TB|GB|MB|B)', re.IGNORECASE)
    size_re = re.compile(r'Size:\s*([\d.]+)(TB|GB|MB|B)', re.IGNORECASE)

    # processing the results, two rows at a time
    for i in xrange(0, len(rows), 2):
@@ -0,0 +1,65 @@
"""
 MyMemory Translated

 @website     https://mymemory.translated.net/
 @provide-api yes (https://mymemory.translated.net/doc/spec.php)
 @using-api   yes
 @results     JSON
 @stable      yes
 @parse       url, title, content
"""
import re
from cgi import escape
from searx.utils import is_valid_lang

categories = ['general']
url = u'http://api.mymemory.translated.net/get?q={query}&langpair={from_lang}|{to_lang}{key}'
web_url = u'http://mymemory.translated.net/en/{from_lang}/{to_lang}/{query}'
weight = 100

parser_re = re.compile(u'.*?([a-z]+)-([a-z]+) (.{2,})$', re.I)
api_key = ''


def request(query, params):
    m = parser_re.match(unicode(query, 'utf8'))
    if not m:
        return params

    from_lang, to_lang, query = m.groups()

    from_lang = is_valid_lang(from_lang)
    to_lang = is_valid_lang(to_lang)

    if not from_lang or not to_lang:
        return params

    if api_key:
        key_form = '&key=' + api_key
    else:
        key_form = ''
    params['url'] = url.format(from_lang=from_lang[1],
                               to_lang=to_lang[1],
                               query=query,
                               key=key_form)
    params['query'] = query
    params['from_lang'] = from_lang
    params['to_lang'] = to_lang

    return params


def response(resp):
    results = []
    results.append({
        'url': escape(web_url.format(
            from_lang=resp.search_params['from_lang'][2],
            to_lang=resp.search_params['to_lang'][2],
            query=resp.search_params['query'])),
        'title': escape('[{0}-{1}] {2}'.format(
            resp.search_params['from_lang'][1],
            resp.search_params['to_lang'][1],
            resp.search_params['query'])),
        'content': escape(resp.json()['responseData']['translatedText'])
    })
    return results
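The new MyMemory engine recognizes queries of the form `<from>-<to> <text>` via `parser_re`. A quick standalone check of that exact pattern (Python 3 here, while the engine code targets Python 2):

```python
import re

# the same pattern the engine compiles as parser_re
parser_re = re.compile(r'.*?([a-z]+)-([a-z]+) (.{2,})$', re.I)

def split_translation_query(query):
    """Return (from_lang, to_lang, text) or None when the query doesn't match."""
    m = parser_re.match(query)
    return m.groups() if m else None

print(split_translation_query('en-hu hello world'))  # ('en', 'hu', 'hello world')
print(split_translation_query('just a search'))      # None
```

Queries without a `xx-yy ` prefix fall through and `request()` returns `params` untouched, so the engine simply yields no results for them.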
@@ -1,56 +1,86 @@
import json
# -*- coding: utf-8 -*-
"""
 Wikidata

 @website     https://wikidata.org
 @provide-api yes (https://wikidata.org/w/api.php)

 @using-api   partially (most things require scraping)
 @results     JSON, HTML
 @stable      no (html can change)
 @parse       url, infobox
"""
from searx import logger
from searx.poolrequests import get
from searx.utils import format_date_by_locale
from searx.engines.xpath import extract_text

from datetime import datetime
from dateutil.parser import parse as dateutil_parse
from json import loads
from lxml.html import fromstring
from urllib import urlencode


logger = logger.getChild('wikidata')
result_count = 1

# urls
wikidata_host = 'https://www.wikidata.org'
url_search = wikidata_host \
    + '/wiki/Special:ItemDisambiguation?{query}'

wikidata_api = wikidata_host + '/w/api.php'
url_search = wikidata_api \
    + '?action=query&list=search&format=json'\
    + '&srnamespace=0&srprop=sectiontitle&{query}'
url_detail = wikidata_api\
    + '?action=wbgetentities&format=json'\
    + '&props=labels%7Cinfo%7Csitelinks'\
    + '%7Csitelinks%2Furls%7Cdescriptions%7Cclaims'\
    + '&{query}'
    + '?action=parse&format=json&{query}'\
    + '&redirects=1&prop=text%7Cdisplaytitle%7Clanglinks%7Crevid'\
    + '&disableeditsection=1&disabletidy=1&preview=1&sectionpreview=1&disabletoc=1&utf8=1&formatversion=2'

url_map = 'https://www.openstreetmap.org/'\
    + '?lat={latitude}&lon={longitude}&zoom={zoom}&layers=M'
url_image = 'https://commons.wikimedia.org/wiki/Special:FilePath/{filename}?width=500&height=400'

# xpaths
wikidata_ids_xpath = '//div/ul[@class="wikibase-disambiguation"]/li/a/@title'
title_xpath = '//*[contains(@class,"wikibase-title-label")]'
description_xpath = '//div[contains(@class,"wikibase-entitytermsview-heading-description")]'
property_xpath = '//div[@id="{propertyid}"]'
label_xpath = './/div[contains(@class,"wikibase-statementgroupview-property-label")]/a'
url_xpath = './/a[contains(@class,"external free") or contains(@class, "wb-external-id")]'
wikilink_xpath = './/ul[contains(@class,"wikibase-sitelinklistview-listview")]'\
    + '/li[contains(@data-wb-siteid,"{wikiid}")]//a/@href'
property_row_xpath = './/div[contains(@class,"wikibase-statementview")]'
preferred_rank_xpath = './/span[contains(@class,"wikibase-rankselector-preferred")]'
value_xpath = './/div[contains(@class,"wikibase-statementview-mainsnak")]'\
    + '/*/div[contains(@class,"wikibase-snakview-value")]'
language_fallback_xpath = '//sup[contains(@class,"wb-language-fallback-indicator")]'
calendar_name_xpath = './/sup[contains(@class,"wb-calendar-name")]'


def request(query, params):
    language = params['language'].split('_')[0]
    if language == 'all':
        language = 'en'

    params['url'] = url_search.format(
        query=urlencode({'srsearch': query,
                         'srlimit': result_count}))
        query=urlencode({'label': query,
                         'language': language}))
    return params


def response(resp):
    results = []
    search_res = json.loads(resp.text)

    wikidata_ids = set()
    for r in search_res.get('query', {}).get('search', {}):
        wikidata_ids.add(r.get('title', ''))
    html = fromstring(resp.content)
    wikidata_ids = html.xpath(wikidata_ids_xpath)

    language = resp.search_params['language'].split('_')[0]
    if language == 'all':
        language = 'en'

    url = url_detail.format(query=urlencode({'ids': '|'.join(wikidata_ids),
                                             'languages': language + '|en'}))

    htmlresponse = get(url)
    jsonresponse = json.loads(htmlresponse.content)
    for wikidata_id in wikidata_ids:
        results = results + getDetail(jsonresponse, wikidata_id, language, resp.search_params['language'])
    # TODO: make requests asynchronous to avoid timeout when result_count > 1
    for wikidata_id in wikidata_ids[:result_count]:
        url = url_detail.format(query=urlencode({'page': wikidata_id,
                                                 'uselang': language}))
        htmlresponse = get(url)
        jsonresponse = loads(htmlresponse.content)
        results += getDetail(jsonresponse, wikidata_id, language, resp.search_params['language'])

    return results
@@ -60,124 +90,206 @@ def getDetail(jsonresponse, wikidata_id, language, locale):
    urls = []
    attributes = []

    result = jsonresponse.get('entities', {}).get(wikidata_id, {})
    title = jsonresponse.get('parse', {}).get('displaytitle', {})
    result = jsonresponse.get('parse', {}).get('text', {})

    title = result.get('labels', {}).get(language, {}).get('value', None)
    if title is None:
        title = result.get('labels', {}).get('en', {}).get('value', None)
    if title is None:
    if not title or not result:
        return results

    description = result\
        .get('descriptions', {})\
        .get(language, {})\
        .get('value', None)
    title = fromstring(title)
    for elem in title.xpath(language_fallback_xpath):
        elem.getparent().remove(elem)
    title = extract_text(title.xpath(title_xpath))

    if description is None:
        description = result\
            .get('descriptions', {})\
            .get('en', {})\
            .get('value', '')
    result = fromstring(result)
    for elem in result.xpath(language_fallback_xpath):
        elem.getparent().remove(elem)

    claims = result.get('claims', {})
    official_website = get_string(claims, 'P856', None)
    if official_website is not None:
        urls.append({'title': 'Official site', 'url': official_website})
        results.append({'title': title, 'url': official_website})
    description = extract_text(result.xpath(description_xpath))

    # URLS

    # official website
    add_url(urls, result, 'P856', results=results)

    # wikipedia
    wikipedia_link_count = 0
    wikipedia_link = get_wikilink(result, language + 'wiki')
    wikipedia_link_count += add_url(urls,
                                    'Wikipedia (' + language + ')',
                                    wikipedia_link)
    if wikipedia_link:
        wikipedia_link_count += 1
        urls.append({'title': 'Wikipedia (' + language + ')',
                     'url': wikipedia_link})

    if language != 'en':
        wikipedia_en_link = get_wikilink(result, 'enwiki')
        wikipedia_link_count += add_url(urls,
                                        'Wikipedia (en)',
                                        wikipedia_en_link)
    if wikipedia_link_count == 0:
        misc_language = get_wiki_firstlanguage(result, 'wiki')
        if misc_language is not None:
            add_url(urls,
                    'Wikipedia (' + misc_language + ')',
                    get_wikilink(result, misc_language + 'wiki'))
        if wikipedia_en_link:
            wikipedia_link_count += 1
            urls.append({'title': 'Wikipedia (en)',
                         'url': wikipedia_en_link})

    if language != 'en':
        add_url(urls,
                'Wiki voyage (' + language + ')',
                get_wikilink(result, language + 'wikivoyage'))
    # TODO: get_wiki_firstlanguage
    # if wikipedia_link_count == 0:

    add_url(urls,
            'Wiki voyage (en)',
            get_wikilink(result, 'enwikivoyage'))
    # more wikis
    add_url(urls, result, default_label='Wikivoyage (' + language + ')', link_type=language + 'wikivoyage')
    add_url(urls, result, default_label='Wikiquote (' + language + ')', link_type=language + 'wikiquote')
    add_url(urls, result, default_label='Wikimedia Commons', link_type='commonswiki')

    if language != 'en':
        add_url(urls,
                'Wikiquote (' + language + ')',
                get_wikilink(result, language + 'wikiquote'))
    add_url(urls, result, 'P625', 'OpenStreetMap', link_type='geo')

    add_url(urls,
            'Wikiquote (en)',
            get_wikilink(result, 'enwikiquote'))
    # musicbrainz
    add_url(urls, result, 'P434', 'MusicBrainz', 'http://musicbrainz.org/artist/')
    add_url(urls, result, 'P435', 'MusicBrainz', 'http://musicbrainz.org/work/')
    add_url(urls, result, 'P436', 'MusicBrainz', 'http://musicbrainz.org/release-group/')
    add_url(urls, result, 'P966', 'MusicBrainz', 'http://musicbrainz.org/label/')

    add_url(urls,
            'Commons wiki',
            get_wikilink(result, 'commonswiki'))
    # IMDb
    add_url(urls, result, 'P345', 'IMDb', 'https://www.imdb.com/', link_type='imdb')
    # source code repository
    add_url(urls, result, 'P1324')
    # blog
    add_url(urls, result, 'P1581')
    # social media links
    add_url(urls, result, 'P2397', 'YouTube', 'https://www.youtube.com/channel/')
    add_url(urls, result, 'P1651', 'YouTube', 'https://www.youtube.com/watch?v=')
    add_url(urls, result, 'P2002', 'Twitter', 'https://twitter.com/')
    add_url(urls, result, 'P2013', 'Facebook', 'https://facebook.com/')
    add_url(urls, result, 'P2003', 'Instagram', 'https://instagram.com/')

    add_url(urls,
            'Location',
            get_geolink(claims, 'P625', None))
    urls.append({'title': 'Wikidata',
                 'url': 'https://www.wikidata.org/wiki/'
                 + wikidata_id + '?uselang=' + language})

    add_url(urls,
            'Wikidata',
            'https://www.wikidata.org/wiki/'
            + wikidata_id + '?uselang=' + language)
    # INFOBOX ATTRIBUTES (ROWS)

    musicbrainz_work_id = get_string(claims, 'P435')
    if musicbrainz_work_id is not None:
        add_url(urls,
                'MusicBrainz',
                'http://musicbrainz.org/work/'
                + musicbrainz_work_id)
    # DATES
    # inception date
    add_attribute(attributes, result, 'P571', date=True)
    # dissolution date
    add_attribute(attributes, result, 'P576', date=True)
    # start date
    add_attribute(attributes, result, 'P580', date=True)
    # end date
    add_attribute(attributes, result, 'P582', date=True)
    # date of birth
    add_attribute(attributes, result, 'P569', date=True)
    # date of death
    add_attribute(attributes, result, 'P570', date=True)
    # date of spacecraft launch
    add_attribute(attributes, result, 'P619', date=True)
    # date of spacecraft landing
    add_attribute(attributes, result, 'P620', date=True)

    musicbrainz_artist_id = get_string(claims, 'P434')
    if musicbrainz_artist_id is not None:
        add_url(urls,
                'MusicBrainz',
                'http://musicbrainz.org/artist/'
                + musicbrainz_artist_id)
    # nationality
    add_attribute(attributes, result, 'P27')
    # country of origin
    add_attribute(attributes, result, 'P495')
    # country
    add_attribute(attributes, result, 'P17')
    # headquarters
    add_attribute(attributes, result, 'Q180')

    musicbrainz_release_group_id = get_string(claims, 'P436')
    if musicbrainz_release_group_id is not None:
        add_url(urls,
                'MusicBrainz',
                'http://musicbrainz.org/release-group/'
                + musicbrainz_release_group_id)
    # PLACES
    # capital
    add_attribute(attributes, result, 'P36', trim=True)
    # head of state
    add_attribute(attributes, result, 'P35', trim=True)
    # head of government
    add_attribute(attributes, result, 'P6', trim=True)
    # type of government
    add_attribute(attributes, result, 'P122')
    # official language
    add_attribute(attributes, result, 'P37')
    # population
    add_attribute(attributes, result, 'P1082', trim=True)
    # area
    add_attribute(attributes, result, 'P2046')
    # currency
    add_attribute(attributes, result, 'P38', trim=True)
    # height (building)
    add_attribute(attributes, result, 'P2048')

    musicbrainz_label_id = get_string(claims, 'P966')
    if musicbrainz_label_id is not None:
        add_url(urls,
                'MusicBrainz',
                'http://musicbrainz.org/label/'
                + musicbrainz_label_id)
    # MEDIA
    # platform (videogames)
    add_attribute(attributes, result, 'P400')
    # author
    add_attribute(attributes, result, 'P50')
    # creator
    add_attribute(attributes, result, 'P170')
    # director
    add_attribute(attributes, result, 'P57')
    # performer
    add_attribute(attributes, result, 'P175')
    # developer
    add_attribute(attributes, result, 'P178')
    # producer
    add_attribute(attributes, result, 'P162')
    # manufacturer
    add_attribute(attributes, result, 'P176')
    # screenwriter
    add_attribute(attributes, result, 'P58')
    # production company
    add_attribute(attributes, result, 'P272')
    # record label
    add_attribute(attributes, result, 'P264')
    # publisher
    add_attribute(attributes, result, 'P123')
    # original network
    add_attribute(attributes, result, 'P449')
    # distributor
    add_attribute(attributes, result, 'P750')
    # composer
    add_attribute(attributes, result, 'P86')
    # publication date
    add_attribute(attributes, result, 'P577', date=True)
    # genre
    add_attribute(attributes, result, 'P136')
    # original language
    add_attribute(attributes, result, 'P364')
    # isbn
    add_attribute(attributes, result, 'Q33057')
    # software license
    add_attribute(attributes, result, 'P275')
    # programming language
    add_attribute(attributes, result, 'P277')
    # version
    add_attribute(attributes, result, 'P348', trim=True)
    # narrative location
    add_attribute(attributes, result, 'P840')

    # musicbrainz_area_id = get_string(claims, 'P982')
    # P1407 MusicBrainz series ID
    # P1004 MusicBrainz place ID
    # P1330 MusicBrainz instrument ID
    # P1407 MusicBrainz series ID
    # LANGUAGES
    # number of speakers
    add_attribute(attributes, result, 'P1098')
    # writing system
    add_attribute(attributes, result, 'P282')
    # regulatory body
    add_attribute(attributes, result, 'P1018')
    # language code
    add_attribute(attributes, result, 'P218')

    postal_code = get_string(claims, 'P281', None)
    if postal_code is not None:
        attributes.append({'label': 'Postal code(s)', 'value': postal_code})
    # OTHER
    # ceo
    add_attribute(attributes, result, 'P169', trim=True)
    # founder
    add_attribute(attributes, result, 'P112')
    # legal form (company/organization)
    add_attribute(attributes, result, 'P1454')
    # operator
    add_attribute(attributes, result, 'P137')
    # crew members (tripulation)
    add_attribute(attributes, result, 'P1029')
    # taxon
    add_attribute(attributes, result, 'P225')
    # chemical formula
    add_attribute(attributes, result, 'P274')
    # winner (sports/contests)
    add_attribute(attributes, result, 'P1346')
    # number of deaths
    add_attribute(attributes, result, 'P1120')
    # currency code
    add_attribute(attributes, result, 'P498')

    date_of_birth = get_time(claims, 'P569', locale, None)
    if date_of_birth is not None:
        attributes.append({'label': 'Date of birth', 'value': date_of_birth})

    date_of_death = get_time(claims, 'P570', locale, None)
    if date_of_death is not None:
        attributes.append({'label': 'Date of death', 'value': date_of_death})
    image = add_image(result)

    if len(attributes) == 0 and len(urls) == 2 and len(description) == 0:
        results.append({
@@ -190,6 +302,7 @@ def getDetail(jsonresponse, wikidata_id, language, locale):
            'infobox': title,
            'id': wikipedia_link,
            'content': description,
            'img_src': image,
            'attributes': attributes,
            'urls': urls
        })
@@ -197,92 +310,151 @@ def getDetail(jsonresponse, wikidata_id, language, locale):
    return results


def add_url(urls, title, url):
    if url is not None:
        urls.append({'title': title, 'url': url})
        return 1
# only returns first match
def add_image(result):
    # P15: route map, P242: locator map, P154: logo, P18: image, P242: map, P41: flag, P2716: collage, P2910: icon
    property_ids = ['P15', 'P242', 'P154', 'P18', 'P242', 'P41', 'P2716', 'P2910']

    for property_id in property_ids:
        image = result.xpath(property_xpath.replace('{propertyid}', property_id))
        if image:
            image_name = image[0].xpath(value_xpath)
            image_src = url_image.replace('{filename}', extract_text(image_name[0]))
            return image_src


# setting trim will only return high ranked rows OR the first row
def add_attribute(attributes, result, property_id, default_label=None, date=False, trim=False):
    attribute = result.xpath(property_xpath.replace('{propertyid}', property_id))
    if attribute:

        if default_label:
            label = default_label
        else:
            label = extract_text(attribute[0].xpath(label_xpath))
            label = label[0].upper() + label[1:]

        if date:
            trim = True
            # remove calendar name
            calendar_name = attribute[0].xpath(calendar_name_xpath)
            for calendar in calendar_name:
                calendar.getparent().remove(calendar)

        concat_values = ""
        values = []
        first_value = None
        for row in attribute[0].xpath(property_row_xpath):
            if not first_value or not trim or row.xpath(preferred_rank_xpath):

                value = row.xpath(value_xpath)
                if not value:
                    continue
                value = extract_text(value)

                # save first value in case no ranked row is found
                if trim and not first_value:
                    first_value = value
                else:
                    # to avoid duplicate values
                    if value not in values:
                        concat_values += value + ", "
                        values.append(value)

        if trim and not values:
            attributes.append({'label': label,
                               'value': first_value})
        else:
            attributes.append({'label': label,
                               'value': concat_values[:-2]})


# requires property_id unless it's a wiki link (defined in link_type)
def add_url(urls, result, property_id=None, default_label=None, url_prefix=None, results=None, link_type=None):
    links = []

    # wiki links don't have property in wikidata page
    if link_type and 'wiki' in link_type:
        links.append(get_wikilink(result, link_type))
    else:
        return 0
        dom_element = result.xpath(property_xpath.replace('{propertyid}', property_id))
        if dom_element:
            dom_element = dom_element[0]
            if not default_label:
                label = extract_text(dom_element.xpath(label_xpath))
                label = label[0].upper() + label[1:]

            if link_type == 'geo':
                links.append(get_geolink(dom_element))

            elif link_type == 'imdb':
                links.append(get_imdblink(dom_element, url_prefix))

            else:
                url_results = dom_element.xpath(url_xpath)
                for link in url_results:
                    if link is not None:
                        if url_prefix:
                            link = url_prefix + extract_text(link)
                        else:
                            link = extract_text(link)
                        links.append(link)

    # append urls
    for url in links:
        if url is not None:
            urls.append({'title': default_label or label,
                         'url': url})
            if results is not None:
                results.append({'title': default_label or label,
                                'url': url})


def get_mainsnak(claims, propertyName):
    propValue = claims.get(propertyName, {})
    if len(propValue) == 0:
def get_imdblink(result, url_prefix):
    imdb_id = result.xpath(value_xpath)
    if imdb_id:
        imdb_id = extract_text(imdb_id)
        id_prefix = imdb_id[:2]
        if id_prefix == 'tt':
            url = url_prefix + 'title/' + imdb_id
        elif id_prefix == 'nm':
            url = url_prefix + 'name/' + imdb_id
        elif id_prefix == 'ch':
            url = url_prefix + 'character/' + imdb_id
        elif id_prefix == 'co':
            url = url_prefix + 'company/' + imdb_id
        elif id_prefix == 'ev':
            url = url_prefix + 'event/' + imdb_id
        else:
            url = None
        return url


def get_geolink(result):
    coordinates = result.xpath(value_xpath)
    if not coordinates:
        return None
    coordinates = extract_text(coordinates[0])
    latitude, longitude = coordinates.split(',')

    propValue = propValue[0].get('mainsnak', None)
    return propValue


def get_string(claims, propertyName, defaultValue=None):
    propValue = claims.get(propertyName, {})
    if len(propValue) == 0:
        return defaultValue

    result = []
    for e in propValue:
        mainsnak = e.get('mainsnak', {})

        datavalue = mainsnak.get('datavalue', {})
        if datavalue is not None:
            result.append(datavalue.get('value', ''))

    if len(result) == 0:
        return defaultValue
    else:
        # TODO handle multiple urls
        return result[0]


def get_time(claims, propertyName, locale, defaultValue=None):
    propValue = claims.get(propertyName, {})
    if len(propValue) == 0:
        return defaultValue

    result = []
    for e in propValue:
        mainsnak = e.get('mainsnak', {})

        datavalue = mainsnak.get('datavalue', {})
        if datavalue is not None:
            value = datavalue.get('value', '')
            result.append(value.get('time', ''))

    if len(result) == 0:
        date_string = defaultValue
    else:
        date_string = ', '.join(result)

    try:
        parsed_date = datetime.strptime(date_string, "+%Y-%m-%dT%H:%M:%SZ")
    except:
        if date_string.startswith('-'):
            return date_string.split('T')[0]
        try:
            parsed_date = dateutil_parse(date_string, fuzzy=False, default=False)
        except:
            logger.debug('could not parse date %s', date_string)
            return date_string.split('T')[0]

    return format_date_by_locale(parsed_date, locale)


def get_geolink(claims, propertyName, defaultValue=''):
    mainsnak = get_mainsnak(claims, propertyName)

    if mainsnak is None:
        return defaultValue

    datatype = mainsnak.get('datatype', '')
    datavalue = mainsnak.get('datavalue', {})

    if datatype != 'globe-coordinate':
        return defaultValue

    value = datavalue.get('value', {})

    precision = value.get('precision', 0.0002)
    # convert to decimal
    lat = int(latitude[:latitude.find(u'°')])
    if latitude.find('\'') >= 0:
        lat += int(latitude[latitude.find(u'°') + 1:latitude.find('\'')] or 0) / 60.0
    if latitude.find('"') >= 0:
        lat += float(latitude[latitude.find('\'') + 1:latitude.find('"')] or 0) / 3600.0
    if latitude.find('S') >= 0:
        lat *= -1
    lon = int(longitude[:longitude.find(u'°')])
    if longitude.find('\'') >= 0:
        lon += int(longitude[longitude.find(u'°') + 1:longitude.find('\'')] or 0) / 60.0
    if longitude.find('"') >= 0:
        lon += float(longitude[longitude.find('\'') + 1:longitude.find('"')] or 0) / 3600.0
    if longitude.find('W') >= 0:
        lon *= -1

    # TODO: get precision
    precision = 0.0002
    # there is no zoom information, deduce from precision (error prone)
    # samples :
    # 13 --> 5
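The rewritten `get_geolink` now parses coordinates scraped as text (e.g. `52°31'12"N`) instead of reading structured JSON. A standalone sketch of that degree/minute/second arithmetic (same slicing logic as above, Python 3 here while the engine targets Python 2):

```python
def dms_to_decimal(coord):
    """Convert a DMS coordinate string like 52°31'12"N to decimal degrees.

    Mirrors the arithmetic in get_geolink: degrees + minutes/60 + seconds/3600,
    negated for the S and W hemispheres.
    """
    value = float(int(coord[:coord.find('°')]))
    if "'" in coord:
        value += int(coord[coord.find('°') + 1:coord.find("'")] or 0) / 60.0
    if '"' in coord:
        value += float(coord[coord.find("'") + 1:coord.find('"')] or 0) / 3600.0
    if 'S' in coord or 'W' in coord:
        value = -value
    return value

print(dms_to_decimal('10°30\'0"S'))  # -10.5
```

Note the `or 0` guards, kept from the engine code: they tolerate an empty minutes or seconds field between the delimiters.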
@@ -298,26 +470,20 @@ def get_geolink(claims, propertyName, defaultValue=''):
    zoom = int(15 - precision * 8.8322 + precision * precision * 0.625447)

    url = url_map\
        .replace('{latitude}', str(value.get('latitude', 0)))\
        .replace('{longitude}', str(value.get('longitude', 0)))\
        .replace('{latitude}', str(lat))\
        .replace('{longitude}', str(lon))\
        .replace('{zoom}', str(zoom))

    return url


def get_wikilink(result, wikiid):
    url = result.get('sitelinks', {}).get(wikiid, {}).get('url', None)
    if url is None:
        return url
    elif url.startswith('http://'):
    url = result.xpath(wikilink_xpath.replace('{wikiid}', wikiid))
    if not url:
        return None
    url = url[0]
    if url.startswith('http://'):
        url = url.replace('http://', 'https://')
    elif url.startswith('//'):
        url = 'https:' + url
    return url


def get_wiki_firstlanguage(result, wikipatternid):
    for k in result.get('sitelinks', {}).keys():
        if k.endswith(wikipatternid) and len(k) == (2 + len(wikipatternid)):
            return k[0:2]
    return None
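The `zoom` line above maps coordinate precision to an OpenStreetMap zoom level with a quadratic heuristic; a quick standalone check of that formula against the `# 13 --> 5` sample from the comment:

```python
def precision_to_zoom(precision):
    # same quadratic as in get_geolink: there is no zoom information in the
    # scraped data, so the zoom is deduced from the coordinate precision
    return int(15 - precision * 8.8322 + precision * precision * 0.625447)

print(precision_to_zoom(13))      # 5  (matches the "# 13 --> 5" sample)
print(precision_to_zoom(0.0002))  # 14 (the hard-coded default precision)
```

With the default precision effectively fixed at 0.0002 (see the `# TODO: get precision` note), every map link currently uses zoom level 14.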
@@ -99,9 +99,8 @@ def response(resp):
        return []

    # link to wikipedia article
    # parenthesis are not quoted to make infobox mergeable with wikidata's
    wikipedia_link = url_lang(resp.search_params['language']) \
        + 'wiki/' + quote(title.replace(' ', '_').encode('utf8')).replace('%28', '(').replace('%29', ')')
        + 'wiki/' + quote(title.replace(' ', '_').encode('utf8'))

    results.append({'url': wikipedia_link, 'title': title})
@@ -8,11 +8,9 @@
# @stable no
# @parse url, infobox

from cgi import escape
from json import loads
from time import time
from urllib import urlencode
from lxml.etree import XML

from searx.poolrequests import get as http_get

@@ -36,7 +34,7 @@ search_url = url + 'input/json.jsp'\
referer_url = url + 'input/?{query}'

token = {'value': '',
         'last_updated': None}
         'last_updated': 0}

# pods to display as image in infobox
# these pods return plaintext, but they look better and are more useful as images
@@ -41,7 +41,7 @@ def response(resp):
    results = []

    dom = html.fromstring(resp.text)
    regex = re.compile('3\.jpg.*$')
    regex = re.compile(r'3\.jpg.*$')

    # parse results
    for result in dom.xpath('//div[@class="photo"]'):
@@ -87,7 +87,7 @@ def request(query, params):

    fp = {'query': query}
    if paging and search_url.find('{pageno}') >= 0:
        fp['pageno'] = (params['pageno'] + first_page_num - 1) * page_size
        fp['pageno'] = (params['pageno'] - 1) * page_size + first_page_num

    params['url'] = search_url.format(**fp)
    params['query'] = query
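The one-line change above fixes the page-offset arithmetic for XPath engines: the old expression scaled `first_page_num` by `page_size` instead of using it as the starting offset. Comparing the two formulas side by side (with illustrative defaults of `page_size=10`, `first_page_num=1`):

```python
def first_result_index(pageno, page_size=10, first_page_num=1):
    # corrected formula: pages are page_size apart, starting at first_page_num
    return (pageno - 1) * page_size + first_page_num

def old_first_result_index(pageno, page_size=10, first_page_num=1):
    # the buggy formula it replaces, shown for comparison
    return (pageno + first_page_num - 1) * page_size

print([first_result_index(p) for p in (1, 2, 3)])      # [1, 11, 21]
print([old_first_result_index(p) for p in (1, 2, 3)])  # [10, 20, 30]
```

With the old formula, page 1 never started at `first_page_num` at all.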
@@ -20,10 +20,12 @@ from searx.engines.xpath import extract_text, extract_url
categories = ['general']
paging = True
language_support = True
time_range_support = True

# search-url
base_url = 'https://search.yahoo.com/'
search_url = 'search?{query}&b={offset}&fl=1&vl=lang_{lang}'
search_url_with_time = 'search?{query}&b={offset}&fl=1&vl=lang_{lang}&age={age}&btf={btf}&fr2=time'

# specific xpath variables
results_xpath = "//div[contains(concat(' ', normalize-space(@class), ' '), ' Sr ')]"

@@ -32,6 +34,10 @@ title_xpath = './/h3/a'
content_xpath = './/div[@class="compText aAbs"]'
suggestion_xpath = "//div[contains(concat(' ', normalize-space(@class), ' '), ' AlsoTry ')]//a"

time_range_dict = {'day': ['1d', 'd'],
                   'week': ['1w', 'w'],
                   'month': ['1m', 'm']}


# remove yahoo-specific tracking-url
def parse_url(url_string):

@@ -51,18 +57,30 @@ def parse_url(url_string):
    return unquote(url_string[start:end])


def _get_url(query, offset, language, time_range):
    if time_range in time_range_dict:
        return base_url + search_url_with_time.format(offset=offset,
                                                      query=urlencode({'p': query}),
                                                      lang=language,
                                                      age=time_range_dict[time_range][0],
                                                      btf=time_range_dict[time_range][1])
    return base_url + search_url.format(offset=offset,
                                        query=urlencode({'p': query}),
                                        lang=language)


def _get_language(params):
    if params['language'] == 'all':
        return 'en'
    return params['language'].split('_')[0]


# do search-request
def request(query, params):
    offset = (params['pageno'] - 1) * 10 + 1
    language = _get_language(params)

    if params['language'] == 'all':
        language = 'en'
    else:
        language = params['language'].split('_')[0]

    params['url'] = base_url + search_url.format(offset=offset,
                                                 query=urlencode({'p': query}),
                                                 lang=language)
    params['url'] = _get_url(query, offset, language, params['time_range'])

    # TODO required?
    params['cookies']['sB'] = 'fl=1&vl=lang_{lang}&sh=1&rw=new&v=1'\
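The new `_get_url` helper switches to `search_url_with_time` whenever the requested `time_range` has an entry in `time_range_dict`. A small sketch of that dispatch, returning just the extra query arguments rather than the full URL (the dict values are taken from the diff; the helper name is illustrative):

```python
# mapping from the diff: searx time_range -> Yahoo's (age, btf) values
time_range_dict = {'day': ['1d', 'd'],
                   'week': ['1w', 'w'],
                   'month': ['1m', 'm']}

def time_range_args(time_range):
    """Extra query arguments for a time-restricted Yahoo search, or {}."""
    if time_range not in time_range_dict:
        return {}
    age, btf = time_range_dict[time_range]
    return {'age': age, 'btf': btf, 'fr2': 'time'}

print(time_range_args('week'))  # {'age': '1w', 'btf': 'w', 'fr2': 'time'}
print(time_range_args(''))      # {} (no time restriction)
```

An unknown or empty range falls through to the plain `search_url`, so engines without a selected time range behave exactly as before.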
@@ -55,7 +55,7 @@ def request(query, params):

def sanitize_url(url):
    if ".yahoo.com/" in url:
        return re.sub(u"\;\_ylt\=.+$", "", url)
        return re.sub(u"\\;\\_ylt\\=.+$", "", url)
    else:
        return url
@@ -19,15 +19,17 @@ from searx import logger

logger = logger.getChild('plugins')

from searx.plugins import (https_rewrite,
from searx.plugins import (doai_rewrite,
                           https_rewrite,
                           infinite_scroll,
                           open_results_on_new_tab,
                           self_info,
                           search_on_category_select,
                           tracker_url_remover,
                           vim_hotkeys)

required_attrs = (('name', str),
                  ('description', str),
required_attrs = (('name', (str, unicode)),
                  ('description', (str, unicode)),
                  ('default_on', bool))

optional_attrs = (('js_dependencies', tuple),

@@ -73,7 +75,9 @@ class PluginStore():


plugins = PluginStore()
plugins.register(doai_rewrite)
plugins.register(https_rewrite)
plugins.register(infinite_scroll)
plugins.register(open_results_on_new_tab)
plugins.register(self_info)
plugins.register(search_on_category_select)
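The `required_attrs` change widens the accepted type for `name` and `description` from `str` to `(str, unicode)`, since `gettext` returns unicode strings on Python 2. A standalone sketch of the kind of type check a plugin registry can perform with such a tuple (Python 3 here, where `unicode` is folded into `str`; the `missing_attrs` helper and demo plugins are illustrative, not the actual `PluginStore` code):

```python
# every plugin module must define these names with these types
required_attrs = (('name', str),
                  ('description', str),
                  ('default_on', bool))

def missing_attrs(plugin):
    """Return the required attributes the plugin lacks or mistypes."""
    return [attr for attr, attr_type in required_attrs
            if not isinstance(getattr(plugin, attr, None), attr_type)]

class GoodPlugin:
    name = 'Demo'
    description = 'Does nothing'
    default_on = False

class BadPlugin:
    name = 'Demo'  # description and default_on are missing

print(missing_attrs(GoodPlugin()))  # []
print(missing_attrs(BadPlugin()))   # ['description', 'default_on']
```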
@ -0,0 +1,31 @@
from flask_babel import gettext
import re
from urlparse import urlparse, parse_qsl

regex = re.compile(r'10\.\d{4,9}/[^\s]+')

name = gettext('DOAI rewrite')
description = gettext('Avoid paywalls by redirecting to open-access versions of publications when available')
default_on = False


def extract_doi(url):
    match = regex.search(url.path)
    if match:
        return match.group(0)
    for _, v in parse_qsl(url.query):
        match = regex.search(v)
        if match:
            return match.group(0)
    return None


def on_result(request, ctx):
    doi = extract_doi(ctx['result']['parsed_url'])
    if doi and len(doi) < 50:
        for suffix in ('/', '.pdf', '/full', '/meta', '/abstract'):
            if doi.endswith(suffix):
                doi = doi[:-len(suffix)]
        ctx['result']['url'] = 'http://doai.io/' + doi
        ctx['result']['parsed_url'] = urlparse(ctx['result']['url'])
    return True

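The new plugin's `extract_doi` searches the URL path first and then each query-string value for a DOI. The same logic can be exercised on its own (Python 3's `urllib.parse` stands in for the Python 2 `urlparse` module used in the diff):

```python
import re
from urllib.parse import urlparse, parse_qsl

# Same DOI pattern as the plugin above: "10.", 4-9 registrant digits, "/", suffix.
regex = re.compile(r'10\.\d{4,9}/[^\s]+')


def extract_doi(url):
    # look for a DOI in the path first, then in each query-string value
    match = regex.search(url.path)
    if match:
        return match.group(0)
    for _, v in parse_qsl(url.query):
        match = regex.search(v)
        if match:
            return match.group(0)
    return None


print(extract_doi(urlparse('https://doi.org/10.1000/182')))           # 10.1000/182
print(extract_doi(urlparse('https://example.org/cgi?doi=10.1000/182')))  # 10.1000/182
```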
@ -21,7 +21,7 @@ from lxml import etree
from os import listdir, environ
from os.path import isfile, isdir, join
from searx.plugins import logger
from flask.ext.babel import gettext
from flask_babel import gettext
from searx import searx_dir


@ -87,7 +87,7 @@ def load_single_https_ruleset(rules_path):

        # convert host-rule to valid regex
        host = ruleset.attrib.get('host')\
            .replace('.', '\.').replace('*', '.*')
            .replace('.', r'\.').replace('*', '.*')

        # append to host list
        hosts.append(host)

@ -0,0 +1,8 @@
from flask_babel import gettext

name = gettext('Infinite scroll')
description = gettext('Automatically load next page when scrolling to bottom of current page')
default_on = False

js_dependencies = ('plugins/js/infinite_scroll.js',)
css_dependencies = ('plugins/css/infinite_scroll.css',)

@ -14,7 +14,7 @@ along with searx. If not, see < http://www.gnu.org/licenses/ >.

(C) 2016 by Adam Tauber, <asciimoo@gmail.com>
'''
from flask.ext.babel import gettext
from flask_babel import gettext
name = gettext('Open result links on new browser tabs')
description = gettext('Results are opened in the same window by default. '
                      'This plugin overwrites the default behaviour to open links on new tabs/windows. '

@ -14,7 +14,7 @@ along with searx. If not, see < http://www.gnu.org/licenses/ >.

(C) 2015 by Adam Tauber, <asciimoo@gmail.com>
'''
from flask.ext.babel import gettext
from flask_babel import gettext
name = gettext('Search on category select')
description = gettext('Perform search immediately if a category selected. '
                      'Disable to select multiple categories. (JavaScript required)')

@ -14,7 +14,7 @@ along with searx. If not, see < http://www.gnu.org/licenses/ >.

(C) 2015 by Adam Tauber, <asciimoo@gmail.com>
'''
from flask.ext.babel import gettext
from flask_babel import gettext
import re
name = "Self Informations"
description = gettext('Displays your IP if the query is "ip" and your user agent if the query contains "user agent".')

@ -29,6 +29,8 @@ p = re.compile('.*user[ -]agent.*', re.IGNORECASE)
# request: flask request object
# ctx: the whole local context of the pre search hook
def post_search(request, ctx):
    if ctx['search'].pageno > 1:
        return True
    if ctx['search'].query == 'ip':
        x_forwarded_for = request.headers.getlist("X-Forwarded-For")
        if x_forwarded_for:

@ -15,7 +15,7 @@ along with searx. If not, see < http://www.gnu.org/licenses/ >.
(C) 2015 by Adam Tauber, <asciimoo@gmail.com>
'''

from flask.ext.babel import gettext
from flask_babel import gettext
import re
from urlparse import urlunparse


@ -1,4 +1,4 @@
from flask.ext.babel import gettext
from flask_babel import gettext

name = gettext('Vim-like hotkeys')
description = gettext('Navigate search results with Vim-like hotkeys '

@ -166,6 +166,7 @@ class SwitchableSetting(Setting):


class EnginesSetting(SwitchableSetting):

    def _post_init(self):
        super(EnginesSetting, self)._post_init()
        transformed_choices = []

@ -191,6 +192,7 @@ class EnginesSetting(SwitchableSetting):


class PluginsSetting(SwitchableSetting):

    def _post_init(self):
        super(PluginsSetting, self)._post_init()
        transformed_choices = []

@ -225,7 +227,8 @@ class Preferences(object):
            'safesearch': MapSetting(settings['search']['safe_search'], map={'0': 0,
                                                                             '1': 1,
                                                                             '2': 2}),
            'theme': EnumStringSetting(settings['ui']['default_theme'], choices=themes)}
            'theme': EnumStringSetting(settings['ui']['default_theme'], choices=themes),
            'results_on_new_tab': MapSetting(False, map={'0': False, '1': True})}

        self.engines = EnginesSetting('engines', choices=engines)
        self.plugins = PluginsSetting('plugins', choices=plugins)

@ -5,7 +5,7 @@ from threading import RLock
from urlparse import urlparse, unquote
from searx.engines import engines

CONTENT_LEN_IGNORED_CHARS_REGEX = re.compile('[,;:!?\./\\\\ ()-_]', re.M | re.U)
CONTENT_LEN_IGNORED_CHARS_REGEX = re.compile(r'[,;:!?\./\\\\ ()-_]', re.M | re.U)
WHITESPACE_REGEX = re.compile('( |\t|\n)+', re.M | re.U)


@ -18,7 +18,17 @@ def result_content_len(content):


def compare_urls(url_a, url_b):
    if url_a.netloc != url_b.netloc or url_a.query != url_b.query:
    # ignore www. in comparison
    if url_a.netloc.startswith('www.'):
        host_a = url_a.netloc.replace('www.', '', 1)
    else:
        host_a = url_a.netloc
    if url_b.netloc.startswith('www.'):
        host_b = url_b.netloc.replace('www.', '', 1)
    else:
        host_b = url_b.netloc

    if host_a != host_b or url_a.query != url_b.query or url_a.fragment != url_b.fragment:
        return False

    # remove / from the end of the url if required

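The reworked `compare_urls` now treats `www.example.org` and `example.org` as the same host. A self-contained sketch of the comparison (the trailing-slash handling is a guess at the part of the function elided from this hunk, and Python 3's `urllib.parse` replaces the Python 2 `urlparse` module):

```python
from urllib.parse import urlparse


def compare_urls(url_a, url_b):
    # ignore a leading "www." when comparing hosts, as in the hunk above
    host_a = url_a.netloc.replace('www.', '', 1) if url_a.netloc.startswith('www.') else url_a.netloc
    host_b = url_b.netloc.replace('www.', '', 1) if url_b.netloc.startswith('www.') else url_b.netloc

    if host_a != host_b or url_a.query != url_b.query or url_a.fragment != url_b.fragment:
        return False

    # tolerate a single trailing-slash difference in the path (assumed detail)
    return url_a.path.rstrip('/') == url_b.path.rstrip('/')


print(compare_urls(urlparse('https://www.example.org/a/'),
                   urlparse('https://example.org/a')))  # True
```

This www-insensitivity is what lets duplicate results from engines that report slightly different URLs get merged.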
@ -33,25 +43,42 @@ def compare_urls(url_a, url_b):


def merge_two_infoboxes(infobox1, infobox2):
    # get engines weights
    if hasattr(engines[infobox1['engine']], 'weight'):
        weight1 = engines[infobox1['engine']].weight
    else:
        weight1 = 1
    if hasattr(engines[infobox2['engine']], 'weight'):
        weight2 = engines[infobox2['engine']].weight
    else:
        weight2 = 1

    if weight2 > weight1:
        infobox1['engine'] = infobox2['engine']

    if 'urls' in infobox2:
        urls1 = infobox1.get('urls', None)
        if urls1 is None:
            urls1 = []
            infobox1['urls'] = urls1

        urlSet = set()
        for url in infobox1.get('urls', []):
            urlSet.add(url.get('url', None))
        for url2 in infobox2.get('urls', []):
            unique_url = True
            for url1 in infobox1.get('urls', []):
                if compare_urls(urlparse(url1.get('url', '')), urlparse(url2.get('url', ''))):
                    unique_url = False
                    break
            if unique_url:
                urls1.append(url2)

        for url in infobox2.get('urls', []):
            if url.get('url', None) not in urlSet:
                urls1.append(url)
        infobox1['urls'] = urls1

    if 'img_src' in infobox2:
        img1 = infobox1.get('img_src', None)
        img2 = infobox2.get('img_src')
        if img1 is None:
            infobox1['img_src'] = img2
        elif weight2 > weight1:
            infobox1['img_src'] = img2

    if 'attributes' in infobox2:
        attributes1 = infobox1.get('attributes', None)

@ -65,7 +92,8 @@ def merge_two_infoboxes(infobox1, infobox2):
            attributeSet.add(attribute.get('label', None))

        for attribute in infobox2.get('attributes', []):
            attributes1.append(attribute)
            if attribute.get('label', None) not in attributeSet:
                attributes1.append(attribute)

    if 'content' in infobox2:
        content1 = infobox1.get('content', None)

@ -91,15 +119,15 @@ def result_score(result):

class ResultContainer(object):
    """docstring for ResultContainer"""

    def __init__(self):
        super(ResultContainer, self).__init__()
        self.results = defaultdict(list)
        self._merged_results = []
        self.infoboxes = []
        self._infobox_ids = {}
        self.suggestions = set()
        self.answers = set()
        self.number_of_results = 0
        self._number_of_results = []

    def extend(self, engine_name, results):
        for result in list(results):

@ -113,7 +141,7 @@ class ResultContainer(object):
                self._merge_infobox(result)
                results.remove(result)
            elif 'number_of_results' in result:
                self.number_of_results = max(self.number_of_results, result['number_of_results'])
                self._number_of_results.append(result['number_of_results'])
                results.remove(result)

        with RLock():

@ -137,14 +165,13 @@ class ResultContainer(object):
        add_infobox = True
        infobox_id = infobox.get('id', None)
        if infobox_id is not None:
            existingIndex = self._infobox_ids.get(infobox_id, None)
            if existingIndex is not None:
                merge_two_infoboxes(self.infoboxes[existingIndex], infobox)
                add_infobox = False
            for existingIndex in self.infoboxes:
                if compare_urls(urlparse(existingIndex.get('id', '')), urlparse(infobox_id)):
                    merge_two_infoboxes(existingIndex, infobox)
                    add_infobox = False

        if add_infobox:
            self.infoboxes.append(infobox)
            self._infobox_ids[infobox_id] = len(self.infoboxes) - 1

    def _merge_result(self, result, position):
        result['parsed_url'] = urlparse(result['url'])

@ -154,11 +181,6 @@ class ResultContainer(object):
            result['parsed_url'] = result['parsed_url']._replace(scheme="http")
            result['url'] = result['parsed_url'].geturl()

        result['host'] = result['parsed_url'].netloc

        if result['host'].startswith('www.'):
            result['host'] = result['host'].replace('www.', '', 1)

        result['engines'] = [result['engine']]

        # strip multiple spaces and cariage returns from content

@ -252,3 +274,9 @@ class ResultContainer(object):

    def results_length(self):
        return len(self._merged_results)

    def results_number(self):
        resultnum_sum = sum(self._number_of_results)
        if not resultnum_sum or not self._number_of_results:
            return 0
        return resultnum_sum / len(self._number_of_results)

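The last hunk changes how the total result count is reported: instead of keeping the maximum of the per-engine totals, the container now collects every engine's reported count and averages them. A standalone sketch of just that behaviour (this trimmed `ResultContainer` is illustrative, not the real class, whose `extend` takes full result lists):

```python
# Sketch of the new results_number(): average of per-engine totals.
class ResultContainer:
    def __init__(self):
        self._number_of_results = []

    def add_reported_total(self, number_of_results):
        # each engine may report its own estimate of total hits
        self._number_of_results.append(number_of_results)

    def results_number(self):
        resultnum_sum = sum(self._number_of_results)
        if not resultnum_sum or not self._number_of_results:
            return 0
        return resultnum_sum / len(self._number_of_results)


c = ResultContainer()
for reported in (1200, 800, 1000):
    c.add_reported_total(reported)
print(c.results_number())  # 1000.0
```

Averaging smooths out a single engine's wildly inflated estimate, which the old `max()` approach would have passed straight through.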
@ -15,14 +15,15 @@ along with searx. If not, see < http://www.gnu.org/licenses/ >.
(C) 2013- by Adam Tauber, <asciimoo@gmail.com>
'''

import gc
import threading
import searx.poolrequests as requests_lib
from thread import start_new_thread
from time import time
from searx import settings
from uuid import uuid4
import searx.poolrequests as requests_lib
from searx.engines import (
    categories, engines
)
from searx.languages import language_codes
from searx.utils import gen_useragent
from searx.query import Query
from searx.results import ResultContainer

@ -56,19 +57,20 @@ def search_request_wrapper(fn, url, engine_name, **kwargs):
def threaded_requests(requests):
    timeout_limit = max(r[2]['timeout'] for r in requests)
    search_start = time()
    search_id = uuid4().__str__()
    for fn, url, request_args, engine_name in requests:
        request_args['timeout'] = timeout_limit
        th = threading.Thread(
            target=search_request_wrapper,
            args=(fn, url, engine_name),
            kwargs=request_args,
            name='search_request',
            name=search_id,
        )
        th._engine_name = engine_name
        th.start()

    for th in threading.enumerate():
        if th.name == 'search_request':
        if th.name == search_id:
            remaining_time = max(0.0, timeout_limit - (time() - search_start))
            th.join(remaining_time)
            if th.isAlive():

@ -138,6 +140,8 @@ class Search(object):
        self.paging = False
        self.pageno = 1
        self.lang = 'all'
        self.time_range = None
        self.is_advanced = None

        # set blocked engines
        self.disabled_engines = request.preferences.engines.get_disabled()

@ -178,9 +182,10 @@ class Search(object):
        if len(query_obj.languages):
            self.lang = query_obj.languages[-1]

        self.engines = query_obj.engines
        self.time_range = self.request_data.get('time_range')
        self.is_advanced = self.request_data.get('advanced_search')

        self.categories = []
        self.engines = query_obj.engines

        # if engines are calculated from query,
        # set categories by using that informations

@ -279,6 +284,9 @@ class Search(object):
            if self.lang != 'all' and not engine.language_support:
                continue

            if self.time_range and not engine.time_range_support:
                continue

            # set default request parameters
            request_params = default_request_params()
            request_params['headers']['User-Agent'] = user_agent

@ -293,6 +301,8 @@ class Search(object):

            # 0 = None, 1 = Moderate, 2 = Strict
            request_params['safesearch'] = request.preferences.get_value('safesearch')
            request_params['time_range'] = self.time_range
            request_params['advanced_search'] = self.is_advanced

            # update request parameters dependent on
            # search-engine (contained in engines folder)

@ -339,6 +349,7 @@ class Search(object):
            return self
        # send all search-request
        threaded_requests(requests)
        start_new_thread(gc.collect, tuple())

        # return results, suggestions, answers and infoboxes
        return self

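The `threaded_requests` hunk above replaces the fixed thread name `'search_request'` with a per-search `uuid4()` string, so that `threading.enumerate()` only joins the threads belonging to this search rather than those of a concurrent one. A simplified sketch of the same idea (the toy request list here is hypothetical; the real function passes engine callables and request kwargs):

```python
import threading
import time
from uuid import uuid4


def threaded_requests(requests, timeout_limit=1.0):
    search_start = time.time()
    # unique name per search, so concurrent searches never join each other's threads
    search_id = str(uuid4())
    for fn in requests:
        threading.Thread(target=fn, name=search_id).start()

    for th in threading.enumerate():
        if th.name == search_id:
            # share the remaining time budget across the joins
            remaining_time = max(0.0, timeout_limit - (time.time() - search_start))
            th.join(remaining_time)


results = []
threaded_requests([lambda: results.append('a'), lambda: results.append('b')])
print(sorted(results))  # ['a', 'b']
```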
@ -25,7 +25,7 @@ outgoing: # communication with search engines
    pool_maxsize : 10 # Number of simultaneous requests by host
# uncomment below section if you want to use a proxy
# see http://docs.python-requests.org/en/latest/user/advanced/#proxies
# SOCKS proxies are not supported : see https://github.com/kennethreitz/requests/pull/478
# SOCKS proxies are also supported: see http://docs.python-requests.org/en/master/user/advanced/#socks
#    proxies :
#        http : http://127.0.0.1:8080
#        https: http://127.0.0.1:8080

@ -84,9 +84,15 @@ engines:
    disabled : True
    shortcut : bb

  - name : btdigg
    engine : btdigg
    shortcut : bt
  - name : crossref
    engine : json_engine
    paging : True
    search_url : http://search.crossref.org/dois?q={query}&page={pageno}
    url_query : doi
    title_query : title
    content_query : fullCitation
    categories : science
    shortcut : cr

  - name : currency
    engine : currency_convert

@ -105,6 +111,13 @@ engines:
  - name : ddg definitions
    engine : duckduckgo_definitions
    shortcut : ddd
    weight : 2
    disabled : True

  - name : digbt
    engine : digbt
    shortcut : dbt
    timeout : 6.0
    disabled : True

  - name : digg

@ -127,10 +140,12 @@ engines:
  - name : wikidata
    engine : wikidata
    shortcut : wd
    weight : 2

  - name : duckduckgo
    engine : duckduckgo
    shortcut : ddg
    disabled : True

# api-key required: http://www.faroo.com/hp/api/api.html#key
#  - name : faroo

@ -200,6 +215,20 @@ engines:
    engine : google_news
    shortcut : gon

  - name : google scholar
    engine : xpath
    paging : True
    search_url : https://scholar.google.com/scholar?start={pageno}&q={query}&hl=en&as_sdt=0,5&as_vis=1
    results_xpath : //div[@class="gs_r"]/div[@class="gs_ri"]
    url_xpath : .//h3/a/@href
    title_xpath : .//h3/a
    content_xpath : .//div[@class="gs_rs"]
    suggestion_xpath : //div[@id="gs_qsuggest"]/ul/li
    page_size : 10
    first_page_num : 0
    categories : science
    shortcut : gos

  - name : google play apps
    engine : xpath
    search_url : https://play.google.com/store/search?q={query}&c=apps

@ -254,6 +283,37 @@ engines:
    disabled : True
    shortcut : habr

  - name : hoogle
    engine : json_engine
    paging : True
    search_url : https://www.haskell.org/hoogle/?mode=json&hoogle={query}&start={pageno}
    results_query : results
    url_query : location
    title_query : self
    content_query : docs
    page_size : 20
    categories : it
    shortcut : ho

  - name : ina
    engine : ina
    shortcut : in
    timeout : 6.0
    disabled : True

  - name : microsoft academic
    engine : json_engine
    paging : True
    search_url : https://academic.microsoft.com/api/search/GetEntityResults?query=%40{query}%40&filters=&offset={pageno}&limit=8&correlationId=undefined
    results_query : results
    url_query : u
    title_query : dn
    content_query : d
    page_size : 8
    first_page_num : 0
    categories : science
    shortcut : ma

  - name : mixcloud
    engine : mixcloud
    shortcut : mc

@ -267,6 +327,18 @@ engines:
    engine : openstreetmap
    shortcut : osm

  - name : openrepos
    engine : xpath
    paging : True
    search_url : https://openrepos.net/search/node/{query}?page={pageno}
    url_xpath : //li[@class="search-result"]//h3[@class="title"]/a/@href
    title_xpath : //li[@class="search-result"]//h3[@class="title"]/a
    content_xpath : //li[@class="search-result"]//div[@class="search-snippet-info"]//p[@class="search-snippet"]
    categories : files
    timeout : 4.0
    disabled : True
    shortcut : or

  - name : photon
    engine : photon
    shortcut : ph

@ -274,7 +346,8 @@ engines:
  - name : piratebay
    engine : piratebay
    shortcut : tpb
    disabled : True
    url: https://pirateproxy.red/
    timeout : 3.0

  - name : qwant
    engine : qwant

@ -304,9 +377,10 @@ engines:
    timeout : 10.0
    disabled : True

  - name : kickass
    engine : kickass
    shortcut : ka
  - name : scanr_structures
    shortcut: scs
    engine : scanr_structures
    disabled : True

  - name : soundcloud
    engine : soundcloud

@ -361,11 +435,6 @@ engines:
    timeout : 6.0
    disabled : True

  - name : torrentz
    engine : torrentz
    timeout : 5.0
    shortcut : to

  - name : twitter
    engine : twitter
    shortcut : tw

@ -426,6 +495,19 @@ engines:
    timeout: 6.0
    categories : science

  - name : dictzone
    engine : dictzone
    shortcut : dc

  - name : mymemory translated
    engine : translated
    shortcut : tl
    timeout : 5.0
    disabled : True
    # You can use without an API key, but you are limited to 1000 words/day
    # See : http://mymemory.translated.net/doc/usagelimits.php
    # api_key : ''

# The blekko technology and team have joined IBM Watson! -> https://blekko.com/
#  - name : blekko images
#    engine : blekko_images

@ -0,0 +1,16 @@
@keyframes rotate-forever {
    0% { transform: rotate(0deg) }
    100% { transform: rotate(360deg) }
}
.loading-spinner {
    animation-duration: 0.75s;
    animation-iteration-count: infinite;
    animation-name: rotate-forever;
    animation-timing-function: linear;
    height: 30px;
    width: 30px;
    border: 8px solid #666;
    border-right-color: transparent;
    border-radius: 50% !important;
    margin: 0 auto;
}

@ -0,0 +1,18 @@
$(document).ready(function() {
    var win = $(window);
    win.scroll(function() {
        if ($(document).height() - win.height() == win.scrollTop()) {
            var formData = $('#pagination form:last').serialize();
            if (formData) {
                $('#pagination').html('<div class="loading-spinner"></div>');
                $.post('/', formData, function (data) {
                    var body = $(data);
                    $('#pagination').remove();
                    $('#main_results').append('<hr/>');
                    $('#main_results').append(body.find('.result'));
                    $('#main_results').append(body.find('#pagination'));
                });
            }
        }
    });
});

@ -4,13 +4,16 @@ $(document).ready(function() {
        $('#categories input[type="checkbox"]').each(function(i, checkbox) {
            $(checkbox).prop('checked', false);
        });
        $('#categories label').removeClass('btn-primary').removeClass('active').addClass('btn-default');
        $(this).removeClass('btn-default').addClass('btn-primary').addClass('active');
        $($(this).children()[0]).prop('checked', 'checked');
        $(document.getElementById($(this).attr("for"))).prop('checked', true);
        if($('#q').val()) {
            $('#search_form').submit();
        }
        return false;
    });
    $('#time-range > option').click(function(e) {
        if($('#q').val()) {
            $('#search_form').submit();
        }
    });
    }
});

File diff suppressed because one or more lines are too long

@ -122,17 +122,13 @@ $(document).ready(function(){
    var map = L.map(leaflet_target);

    // create the tile layer with correct attribution
    var osmMapnikUrl='https://{s}.tile.openstreetmap.org/{z}/{x}/{y}.png';
    var osmMapnikAttrib='Map data © <a href="https://openstreetmap.org">OpenStreetMap</a> contributors';
    var osmMapnik = new L.TileLayer(osmMapnikUrl, {minZoom: 1, maxZoom: 19, attribution: osmMapnikAttrib});

    var osmMapquestUrl='http://otile{s}.mqcdn.com/tiles/1.0.0/map/{z}/{x}/{y}.jpg';
    var osmMapquestAttrib='Map data © <a href="https://openstreetmap.org">OpenStreetMap</a> contributors | Tiles Courtesy of <a href="http://www.mapquest.com/" target="_blank">MapQuest</a> <img src="http://developer.mapquest.com/content/osm/mq_logo.png">';
    var osmMapquest = new L.TileLayer(osmMapquestUrl, {minZoom: 1, maxZoom: 18, subdomains: '1234', attribution: osmMapquestAttrib});

    var osmMapquestOpenAerialUrl='http://otile{s}.mqcdn.com/tiles/1.0.0/sat/{z}/{x}/{y}.jpg';
    var osmMapquestOpenAerialAttrib='Map data © <a href="https://openstreetmap.org">OpenStreetMap</a> contributors | Tiles Courtesy of <a href="http://www.mapquest.com/" target="_blank">MapQuest</a> <img src="https://developer.mapquest.com/content/osm/mq_logo.png"> | Portions Courtesy NASA/JPL-Caltech and U.S. Depart. of Agriculture, Farm Service Agency';
    var osmMapquestOpenAerial = new L.TileLayer(osmMapquestOpenAerialUrl, {minZoom: 1, maxZoom: 11, subdomains: '1234', attribution: osmMapquestOpenAerialAttrib});
    var osmMapnikUrl='https://{s}.tile.openstreetmap.org/{z}/{x}/{y}.png';
    var osmMapnikAttrib='Map data © <a href="https://openstreetmap.org">OpenStreetMap</a> contributors';
    var osmMapnik = new L.TileLayer(osmMapnikUrl, {minZoom: 1, maxZoom: 19, attribution: osmMapnikAttrib});

    var osmWikimediaUrl='https://maps.wikimedia.org/osm-intl/{z}/{x}/{y}.png';
    var osmWikimediaAttrib = 'Wikimedia maps beta | Maps data © <a href="https://openstreetmap.org">OpenStreetMap</a> contributors';
    var osmWikimedia = new L.TileLayer(osmWikimediaUrl, {minZoom: 1, maxZoom: 19, attribution: osmWikimediaAttrib});

    // init map view
    if(map_bounds) {

@ -149,12 +145,11 @@ $(document).ready(function(){
        map.setView(new L.LatLng(map_lat, map_lon),8);
    }

    map.addLayer(osmMapquest);
    map.addLayer(osmMapnik);

    var baseLayers = {
        "OSM Mapnik": osmMapnik,
        "MapQuest": osmMapquest/*,
        "MapQuest Open Aerial": osmMapquestOpenAerial*/
        "OSM Mapnik": osmMapnik/*,
        "OSM Wikimedia": osmWikimedia*/
    };

    L.control.layers(baseLayers).addTo(map);

@ -0,0 +1,72 @@
#advanced-search-container {
    display: none;
    text-align: left;
    margin-bottom: 1rem;
    clear: both;

    label, .input-group-addon {
        font-size: 1.2rem;
        font-weight: normal;
        background-color: white;
        border: @mild-gray 1px solid;
        border-right: none;
        color: @dark-gray;
        padding-bottom: 0.4rem;
        padding-right: 0.7rem;
        padding-left: 0.7rem;
    }

    label:last-child, .input-group-addon:last-child {
        border-right: @mild-gray 1px solid;
    }

    input[type="radio"] {
        display: none;
    }

    input[type="radio"]:checked + label {
        color: @black;
        font-weight: bold;
        border-bottom: @light-green 5px solid;
    }
    select {
        appearance: none;
        -webkit-appearance: none;
        -moz-appearance: none;
        font-size: 1.2rem;
        font-weight: normal;
        background-color: white;
        border: @mild-gray 1px solid;
        color: @dark-gray;
        padding-bottom: 0.4rem;
        padding-top: 0.4rem;
        padding-left: 1rem;
        padding-right: 5rem;
        margin-right: 0.5rem;
        background: url(data:image/png;base64,iVBORw0KGgoAAAANSUhEUgAAAA8AAAAPCAQAAACR313BAAAABGdBTUEAALGPC/xhBQAAACBjSFJNAAB6JgAAgIQAAPoAAACA6AAAdTAAAOpgAAA6mAAAF3CculE8AAAAAmJLR0QA/4ePzL8AAAAJcEhZcwAABFkAAARZAVnbJUkAAAAHdElNRQfgBxgLDwB20OFsAAAAbElEQVQY073OsQ3CMAAEwJMYwJGnsAehpoXJItltBkmcdZBYgIIiQoLglnz3ui+eP+bk5uneteTMZJa6OJuIqvYzSJoqwqBq8gdmTTW86/dghxAUq4xsVYT9laBYXCw93Aajh7GPEF23t4fkBYevGFTANkPRAAAAJXRFWHRkYXRlOmNyZWF0ZQAyMDE2LTA3LTI0VDExOjU1OjU4KzAyOjAwRFqFOQAAACV0RVh0ZGF0ZTptb2RpZnkAMjAxNi0wNy0yNFQxMToxNTowMCswMjowMP7RDgQAAAAZdEVYdFNvZnR3YXJlAHd3dy5pbmtzY2FwZS5vcmeb7jwaAAAAAElFTkSuQmCC) 96% no-repeat;
    }
}

#check-advanced {
    display: none;
}

#check-advanced:checked ~ #advanced-search-container {
    display: block;
}

.advanced {
    padding: 0;
    margin-top: 0.3rem;
    text-align: right;
    label, select {
        cursor: pointer;
    }
}

@ -39,7 +39,7 @@
        padding: 0 30px;
        margin: 0;
    }
    z-index: 1;
    z-index: 10;
}

// Hover color

@ -0,0 +1,57 @@
.onoff-checkbox {
    width: 15%;
}
.onoffswitch {
    position: relative;
    width: 110px;
    -webkit-user-select: none;
    -moz-user-select: none;
    -ms-user-select: none;
}
.onoffswitch-checkbox {
    display: none;
}
.onoffswitch-label {
    display: block;
    overflow: hidden;
    cursor: pointer;
    border: 2px solid #FFFFFF !important;
    border-radius: 50px !important;
}
.onoffswitch-inner {
    display: block;
    transition: margin 0.3s ease-in 0s;
}

.onoffswitch-inner:before, .onoffswitch-inner:after {
    display: block;
    float: left;
    width: 50%;
    height: 30px;
    padding: 0;
    line-height: 40px;
    font-size: 20px;
    box-sizing: border-box;
    content: "";
    background-color: #EEEEEE;
}

.onoffswitch-switch {
    display: block;
    width: 37px;
    background-color: @light-green;
    position: absolute;
    top: 0;
    bottom: 0;
    right: 0px;
    border: 2px solid #FFFFFF !important;
    border-radius: 50px !important;
    transition: all 0.3s ease-in 0s;
}
.onoffswitch-checkbox:checked + .onoffswitch-label .onoffswitch-inner {
    margin-right: 0;
}
.onoffswitch-checkbox:checked + .onoffswitch-label .onoffswitch-switch {
    right: 71px;
    background-color: #A1A1A1;
}

@ -6,12 +6,16 @@

@import "checkbox.less";

@import "onoff.less";

@import "results.less";

@import "infobox.less";

@import "search.less";

@import "advanced.less";

@import "cursor.less";

@import "code.less";

@ -6,7 +6,7 @@
    .favicon {
        margin-bottom: -3px;
    }

    a {
        color: @black;
        text-decoration: none;

@ -18,7 +18,7 @@
        &:visited {
            color: @violet;
        }

    .highlight {
        background-color: @dim-gray;
        // Chrome hack: bold is different size than normal

@ -64,10 +64,9 @@
    float: left !important;
    width: 24%;
    margin: .5%;
    a{
    a {
        display: block;
        width: 100%;
        height: 170px;
        background-size: cover;
    }
}

@ -148,3 +147,21 @@
    color: @gray;
    background: transparent;
}

.result .text-muted small {
    word-wrap: break-word;
}

.modal-wrapper {
    box-shadow: 0 5px 15px rgba(0, 0, 0, 0.5);
}

.modal-wrapper {
    background-clip: padding-box;
    background-color: #fff;
    border: 1px solid rgba(0, 0, 0, 0.2);
    border-radius: 6px;
    box-shadow: 0 3px 9px rgba(0, 0, 0, 0.5);
    outline: 0 none;
    position: relative;
}

@ -1,36 +1,33 @@
.search_categories, #categories {
    margin: 10px 0 4px 0;
    text-transform: capitalize;

    label{
        border: none;
        box-shadow: none;
        font-size: 13px;
        padding-bottom: 2px;
        color: @gray;
        margin-bottom: 5px;
    margin-bottom: 0.5rem;
    display: flex;
    flex-wrap: wrap;
    flex-flow: row wrap;
    align-content: stretch;

        &:hover{
            color: @black;
            background-color: transparent;
        }

        &:active{
            box-shadow: none;
        }
    label, .input-group-addon {
        flex-grow: 1;
        flex-basis: auto;
        font-size: 1.2rem;
        font-weight: normal;
        background-color: white;
        border: @mild-gray 1px solid;
        border-right: none;
        color: @dark-gray;
        padding-bottom: 0.4rem;
        padding-top: 0.4rem;
        text-align: center;
    }
    label:last-child, .input-group-addon:last-child {
        border-right: @mild-gray 1px solid;
    }

    .active, .btn-primary{
    input[type="checkbox"]:checked + label {
        color: @black;
        font-weight: 700;
        border-bottom: 5px solid @light-green;
        background-color: transparent;
        font-weight: bold;
        border-bottom: @light-green 5px solid;
    }

}

#categories{
    margin: 0;
}

#main-logo{

@ -2,7 +2,9 @@
@gray: #A4A4A4;
@dim-gray: #F6F9FA;
@dark-gray: #666;
@blue: #0088CC;
@middle-gray: #F5F5F5;
@mild-gray: #DDD;
@blue: #0088CC;
@red: #F35E77;
@violet: #684898;
@green: #2ecc71;

@ -0,0 +1,49 @@
#advanced-search-container {
    display: none;
    text-align: center;
    margin-bottom: 1rem;
    clear: both;

    label, .input-group-addon {
        font-size: 1.3rem;
        font-weight: normal;
        background-color: white;
        border: #DDD 1px solid;
        border-right: none;
        color: #333;
        padding-bottom: 0.8rem;
        padding-left: 1.2rem;
        padding-right: 1.2rem;
    }

    label:last-child, .input-group-addon:last-child {
        border-right: #DDD 1px solid;
    }

    input[type="radio"] {
        display: none;
    }

    input[type="radio"]:checked + label {
        color: black;
        font-weight: bold;
        background-color: #EEE;
    }
}

#check-advanced {
    display: none;
}

#check-advanced:checked ~ #advanced-search-container {
    display: block;
}

.advanced {
    padding: 0;
    margin-top: 0.3rem;
    text-align: right;
    label, select {
        cursor: pointer;
    }
}

@@ -0,0 +1,57 @@
.onoff-checkbox {
    width: 15%;
}
.onoffswitch {
    position: relative;
    width: 110px;
    -webkit-user-select: none;
    -moz-user-select: none;
    -ms-user-select: none;
}
.onoffswitch-checkbox {
    display: none;
}
.onoffswitch-label {
    display: block;
    overflow: hidden;
    cursor: pointer;
    border: 2px solid #FFFFFF !important;
    border-radius: 50px !important;
}
.onoffswitch-inner {
    display: block;
    transition: margin 0.3s ease-in 0s;
}

.onoffswitch-inner:before, .onoffswitch-inner:after {
    display: block;
    float: left;
    width: 50%;
    height: 30px;
    padding: 0;
    line-height: 40px;
    font-size: 20px;
    box-sizing: border-box;
    content: "";
    background-color: #EEEEEE;
}

.onoffswitch-switch {
    display: block;
    width: 37px;
    background-color: #00CC00;
    position: absolute;
    top: 0;
    bottom: 0;
    right: 0px;
    border: 2px solid #FFFFFF !important;
    border-radius: 50px !important;
    transition: all 0.3s ease-in 0s;
}
.onoffswitch-checkbox:checked + .onoffswitch-label .onoffswitch-inner {
    margin-right: 0;
}
.onoffswitch-checkbox:checked + .onoffswitch-label .onoffswitch-switch {
    right: 71px;
    background-color: #A1A1A1;
}
@@ -2,12 +2,16 @@

@import "checkbox.less";

@import "onoff.less";

@import "results.less";

@import "infobox.less";

@import "search.less";

@import "advanced.less";

@import "cursor.less";

@import "code.less";
@@ -6,10 +6,10 @@
.favicon {
    margin-bottom: -3px;
}

a {
    vertical-align: bottom;

    .highlight {
        font-weight: bold;
    }
@@ -81,3 +81,21 @@
    color: #AAA;
    background: #FFF;
}

.result .text-muted small {
    word-wrap: break-word;
}

.modal-wrapper {
    box-shadow: 0 5px 15px rgba(0, 0, 0, 0.5);
}

.modal-wrapper {
    background-clip: padding-box;
    background-color: #fff;
    border: 1px solid rgba(0, 0, 0, 0.2);
    border-radius: 6px;
    box-shadow: 0 3px 9px rgba(0, 0, 0, 0.5);
    outline: 0 none;
    position: relative;
}
@@ -1,4 +1,32 @@
.search_categories {
    margin: 10px 0;
    text-transform: capitalize;
.search_categories, #categories {
    text-transform: capitalize;
    margin-bottom: 1.5rem;
    margin-top: 1.5rem;
    display: flex;
    flex-wrap: wrap;
    align-content: stretch;

    label, .input-group-addon {
        flex-grow: 1;
        flex-basis: auto;
        font-size: 1.3rem;
        font-weight: normal;
        background-color: white;
        border: #DDD 1px solid;
        border-right: none;
        color: #333;
        padding-bottom: 0.8rem;
        padding-top: 0.8rem;
        text-align: center;
    }

    label:last-child, .input-group-addon:last-child {
        border-right: #DDD 1px solid;
    }

    input[type="checkbox"]:checked + label{
        color: black;
        font-weight: bold;
        background-color: #EEE;
    }
}
@@ -0,0 +1,9 @@
{% extends "courgette/base.html" %}
{% block content %}
<div class="center">
    <h1>{{ _('Page not found') }}</h1>
    {% autoescape false %}
    <p>{{ _('Go to %(search_page)s.', search_page='<a href="{}">{}</a>'.decode('utf-8').format(url_for('index'), _('search page'))) }}</p>
    {% endautoescape %}
</div>
{% endblock %}
@@ -6,20 +6,20 @@

<p>Searx is a <a href="https://en.wikipedia.org/wiki/Metasearch_engine">metasearch engine</a>, aggregating the results of other <a href="{{ url_for('preferences') }}">search engines</a> while not storing information about its users.
</p>
<h2>Why use Searx?</h2>
<h2>Why use searx?</h2>
<ul>
    <li>Searx may not offer you as personalised results as Google, but it doesn't generate a profile about you</li>
    <li>Searx doesn't care about what you search for, never shares anything with a third party, and it can't be used to compromise you</li>
    <li>Searx is free software, the code is 100% open and you can help to make it better. See more on <a href="https://github.com/asciimoo/searx">github</a></li>
    <li>searx may not offer you as personalised results as Google, but it doesn't generate a profile about you</li>
    <li>searx doesn't care about what you search for, never shares anything with a third party, and it can't be used to compromise you</li>
    <li>searx is free software, the code is 100% open and you can help to make it better. See more on <a href="https://github.com/asciimoo/searx">github</a></li>
</ul>
<p>If you do care about privacy, want to be a conscious user, or otherwise believe
in digital freedom, make Searx your default search engine or run it on your own server</p>
in digital freedom, make searx your default search engine or run it on your own server</p>

<h2>Technical details - How does it work?</h2>

<p>Searx is a <a href="https://en.wikipedia.org/wiki/Metasearch_engine">metasearch engine</a>,
inspired by the <a href="http://seeks-project.info/">seeks project</a>.<br />
It provides basic privacy by mixing your queries with searches on other platforms without storing search data. Queries are made using a POST request on every browser (except chrome*). Therefore they show up in neither our logs, nor your url history. In case of Chrome* users there is an exception, Searx uses the search bar to perform GET requests.<br />
It provides basic privacy by mixing your queries with searches on other platforms without storing search data. Queries are made using a POST request on every browser (except chrome*). Therefore they show up in neither our logs, nor your url history. In case of Chrome* users there is an exception, searx uses the search bar to perform GET requests.<br />
Searx can be added to your browser's search bar; moreover, it can be set as the default search engine.
</p>
@@ -2,7 +2,7 @@
<html xmlns="http://www.w3.org/1999/xhtml" lang="en" xml:lang="en"{% if rtl %} dir="rtl"{% endif %}>
<head>
    <meta charset="UTF-8" />
    <meta name="description" content="Searx - a privacy-respecting, hackable metasearch engine" />
    <meta name="description" content="searx - a privacy-respecting, hackable metasearch engine" />
    <meta name="keywords" content="searx, search, search engine, metasearch, meta search" />
    <meta name="generator" content="searx/{{ searx_version }}">
    <meta name="referrer" content="no-referrer">
@@ -1,8 +1,8 @@
<div class="result {{ result.class }}">
    <h3 class="result_title">{% if result['favicon'] %}<img width="14" height="14" class="favicon" src="static/{{theme}}/img/icon_{{result['favicon']}}.ico" alt="{{result['favicon']}}" />{% endif %}<a href="{{ result.url }}" rel="noreferrer">{{ result.title|safe }}</a></h3>
    <h3 class="result_title">{% if result['favicon'] %}<img width="14" height="14" class="favicon" src="static/{{theme}}/img/icon_{{result['favicon']}}.ico" alt="{{result['favicon']}}" />{% endif %}<a href="{{ result.url }}" {% if results_on_new_tab %}target="_blank" rel="noopener noreferrer"{% else %}rel="noreferrer"{% endif %}>{{ result.title|safe }}</a></h3>
    {% if result.publishedDate %}<span class="published_date">{{ result.publishedDate }}</span>{% endif %}
    <p class="content">{% if result.img_src %}<img src="{{ image_proxify(result.img_src) }}" class="image" />{% endif %}{% if result.content %}{{ result.content|safe }}<br class="last"/>{% endif %}</p>
    {% if result.repository %}<p class="content"><a href="{{ result.repository|safe }}" rel="noreferrer">{{ result.repository }}</a></p>{% endif %}
    {% if result.repository %}<p class="content"><a href="{{ result.repository|safe }}" {% if results_on_new_tab %}target="_blank" rel="noopener noreferrer"{% else %}rel="noreferrer"{% endif %}>{{ result.repository }}</a></p>{% endif %}
    <div dir="ltr">
        {{ result.codelines|code_highlighter(result.code_language)|safe }}
    </div>
@@ -5,7 +5,7 @@
{% endif %}

<div>
    <h3 class="result_title"><a href="{{ result.url }}" rel="noreferrer">{{ result.title|safe }}</a></h3>
    <h3 class="result_title"><a href="{{ result.url }}" {% if results_on_new_tab %}target="_blank" rel="noopener noreferrer"{% else %}rel="noreferrer"{% endif %}>{{ result.title|safe }}</a></h3>
    {% if result.publishedDate %}<span class="published_date">{{ result.publishedDate }}</span>{% endif %}
    <p class="content">{% if result.content %}{{ result.content|safe }}<br />{% endif %}</p>
    <p class="url">{{ result.pretty_url }}&lrm;</p>
@@ -1,6 +1,6 @@
<div class="image_result">
    <p>
        <a href="{{ result.img_src }}" rel="noreferrer"><img src="{% if result.thumbnail_src %}{{ image_proxify(result.thumbnail_src) }}{% else %}{{ image_proxify(result.img_src) }}{% endif %}" title="{{ result.title|striptags }}" alt="{{ result.title|striptags }}"/></a>
        <span class="url"><a href="{{ result.url }}" rel="noreferrer" class="small_font">{{ _('original context') }}</a></span>
        <a href="{{ result.img_src }}" {% if results_on_new_tab %}target="_blank" rel="noopener noreferrer"{% else %}rel="noreferrer"{% endif %}><img src="{% if result.thumbnail_src %}{{ image_proxify(result.thumbnail_src) }}{% else %}{{ image_proxify(result.img_src) }}{% endif %}" title="{{ result.title|striptags }}" alt="{{ result.title|striptags }}"/></a>
        <span class="url"><a href="{{ result.url }}" {% if results_on_new_tab %}target="_blank" rel="noopener noreferrer"{% else %}rel="noreferrer"{% endif %} class="small_font">{{ _('original context') }}</a></span>
    </p>
</div>
@@ -5,7 +5,7 @@
{% endif %}

<div>
    <h3 class="result_title"><a href="{{ result.url }}" rel="noreferrer">{{ result.title|safe }}</a></h3>
    <h3 class="result_title"><a href="{{ result.url }}" {% if results_on_new_tab %}target="_blank" rel="noopener noreferrer"{% else %}rel="noreferrer"{% endif %}>{{ result.title|safe }}</a></h3>
    {% if result.publishedDate %}<span class="published_date">{{ result.publishedDate }}</span>{% endif %}
    <p class="content">{% if result.content %}{{ result.content|safe }}<br />{% endif %}</p>
    <p class="url">{{ result.pretty_url }}&lrm;</p>
@@ -2,12 +2,12 @@
{% if "icon_"~result.engine~".ico" in favicons %}
    <img width="14" height="14" class="favicon" src="{{ url_for('static', filename='img/icons/icon_'+result.engine+'.ico') }}" alt="{{result.engine}}" />
{% endif %}
<h3 class="result_title"><a href="{{ result.url }}" rel="noreferrer">{{ result.title|safe }}</a></h3>
<h3 class="result_title"><a href="{{ result.url }}" {% if results_on_new_tab %}target="_blank" rel="noopener noreferrer"{% else %}rel="noreferrer"{% endif %}>{{ result.title|safe }}</a></h3>
{% if result.content %}<span class="content">{{ result.content|safe }}</span><br />{% endif %}
<span class="stats">{{ _('Seeder') }} : {{ result.seed }}, {{ _('Leecher') }} : {{ result.leech }}</span><br />
<span>
    {% if result.magnetlink %}<a href="{{ result.magnetlink }}" class="magnetlink">{{ _('magnet link') }}</a>{% endif %}
    {% if result.torrentfile %}<a href="{{ result.torrentfile }}" class="torrentfile" rel="noreferrer">{{ _('torrent file') }}</a>{% endif %}
    {% if result.torrentfile %}<a href="{{ result.torrentfile }}" class="torrentfile" {% if results_on_new_tab %}target="_blank" rel="noopener noreferrer"{% else %}rel="noreferrer"{% endif %}>{{ _('torrent file') }}</a>{% endif %}
</span>
<p class="url">{{ result.pretty_url }}&lrm;</p>
</div>
@@ -3,8 +3,8 @@
    <img width="14" height="14" class="favicon" src="{{ url_for('static', filename='img/icons/icon_'+result.engine+'.ico') }}" alt="{{result.engine}}" />
{% endif %}

<h3 class="result_title"><a href="{{ result.url }}" rel="noreferrer">{{ result.title|safe }}</a></h3>
<h3 class="result_title"><a href="{{ result.url }}" {% if results_on_new_tab %}target="_blank" rel="noopener noreferrer"{% else %}rel="noreferrer"{% endif %}>{{ result.title|safe }}</a></h3>
{% if result.publishedDate %}<span class="published_date">{{ result.publishedDate }}</span><br />{% endif %}
<a href="{{ result.url }}" rel="noreferrer"><img width="400" src="{{ image_proxify(result.thumbnail) }}" title="{{ result.title|striptags }}" alt="{{ result.title|striptags }}"/></a>
<a href="{{ result.url }}" {% if results_on_new_tab %}target="_blank" rel="noopener noreferrer"{% else %}rel="noreferrer"{% endif %}><img width="400" src="{{ image_proxify(result.thumbnail) }}" title="{{ result.title|striptags }}" alt="{{ result.title|striptags }}"/></a>
<p class="url">{{ result.pretty_url }}&lrm;</p>
</div>
@@ -0,0 +1,9 @@
{% extends "default/base.html" %}
{% block content %}
<div class="center">
    <h1>{{ _('Page not found') }}</h1>
    {% autoescape false %}
    <p>{{ _('Go to %(search_page)s.', search_page='<a href="{}">{}</a>'.decode('utf-8').format(url_for('index'), _('search page'))) }}</p>
    {% endautoescape %}
</div>
{% endblock %}
@@ -6,20 +6,20 @@

<p>Searx is a <a href="https://en.wikipedia.org/wiki/Metasearch_engine">metasearch engine</a>, aggregating the results of other <a href="{{ url_for('preferences') }}">search engines</a> while not storing information about its users.
</p>
<h2>Why use Searx?</h2>
<h2>Why use searx?</h2>
<ul>
    <li>Searx may not offer you as personalised results as Google, but it doesn't generate a profile about you</li>
    <li>Searx doesn't care about what you search for, never shares anything with a third party, and it can't be used to compromise you</li>
    <li>Searx is free software, the code is 100% open and you can help to make it better. See more on <a href="https://github.com/asciimoo/searx">github</a></li>
    <li>searx may not offer you as personalised results as Google, but it doesn't generate a profile about you</li>
    <li>searx doesn't care about what you search for, never shares anything with a third party, and it can't be used to compromise you</li>
    <li>searx is free software, the code is 100% open and you can help to make it better. See more on <a href="https://github.com/asciimoo/searx">github</a></li>
</ul>
<p>If you do care about privacy, want to be a conscious user, or otherwise believe
in digital freedom, make Searx your default search engine or run it on your own server</p>
in digital freedom, make searx your default search engine or run it on your own server</p>

<h2>Technical details - How does it work?</h2>

<p>Searx is a <a href="https://en.wikipedia.org/wiki/Metasearch_engine">metasearch engine</a>,
inspired by the <a href="http://seeks-project.info/">seeks project</a>.<br />
It provides basic privacy by mixing your queries with searches on other platforms without storing search data. Queries are made using a POST request on every browser (except chrome*). Therefore they show up in neither our logs, nor your url history. In case of Chrome* users there is an exception, if Searx used from the search bar it performs GET requests.<br />
It provides basic privacy by mixing your queries with searches on other platforms without storing search data. Queries are made using a POST request on every browser (except chrome*). Therefore they show up in neither our logs, nor your url history. In case of Chrome* users there is an exception, if searx used from the search bar it performs GET requests.<br />
Searx can be added to your browser's search bar; moreover, it can be set as the default search engine.
</p>
@@ -2,7 +2,7 @@
<html xmlns="http://www.w3.org/1999/xhtml" lang="en" xml:lang="en"{% if rtl %} dir="rtl"{% endif %}>
<head>
    <meta charset="UTF-8" />
    <meta name="description" content="Searx - a privacy-respecting, hackable metasearch engine" />
    <meta name="description" content="searx - a privacy-respecting, hackable metasearch engine" />
    <meta name="keywords" content="searx, search, search engine, metasearch, meta search" />
    <meta name="generator" content="searx/{{ searx_version }}">
    <meta name="referrer" content="no-referrer">
@@ -1,18 +1,18 @@
<div class="infobox">
    <h2>{{ infobox.infobox }}</h2>
    <h2><bdi>{{ infobox.infobox }}</bdi></h2>
    {% if infobox.img_src %}<img src="{{ image_proxify(infobox.img_src) }}" title="{{ infobox.infobox|striptags }}" alt="{{ infobox.infobox|striptags }}" />{% endif %}
    <p>{{ infobox.entity }}</p>
    <p>{{ infobox.content | safe }}</p>
    <p><bdi>{{ infobox.entity }}</bdi></p>
    <p><bdi>{{ infobox.content | safe }}</bdi></p>
    {% if infobox.attributes %}
    <div class="attributes">
        <table>
            {% for attribute in infobox.attributes %}
            <tr>
                <td>{{ attribute.label }}</td>
                <td><bdi>{{ attribute.label }}</bdi></td>
                {% if attribute.image %}
                <td><img src="{{ image_proxify(attribute.image.src) }}" alt="{{ attribute.image.alt }}" /></td>
                {% else %}
                <td>{{ attribute.value }}</td>
                <td><bdi>{{ attribute.value }}</bdi></td>
                {% endif %}
            </tr>
            {% endfor %}

@@ -24,7 +24,7 @@
    <div class="urls">
        <ul>
            {% for url in infobox.urls %}
            <li class="url"><a href="{{ url.url }}" rel="noreferrer">{{ url.title }}</a></li>
            <li class="url"><bdi><a href="{{ url.url }}" {% if results_on_new_tab %}target="_blank" rel="noopener noreferrer"{% else %}rel="noreferrer"{% endif %}>{{ url.title }}</a></bdi></li>
            {% endfor %}
        </ul>
    </div>

@@ -34,7 +34,7 @@
    <div class="relatedTopics">
        {% for topic in infobox.relatedTopics %}
        <div>
            <h3>{{ topic.name }}</h3>
            <h3><bdi>{{ topic.name }}</bdi></h3>
            {% for suggestion in topic.suggestions %}
            <form method="{{ method or 'POST' }}" action="{{ url_for('index') }}">
                <input type="hidden" name="q" value="{{ suggestion }}">
@@ -80,6 +80,15 @@
        </select>
    </p>
</fieldset>
<fieldset>
    <legend>{{ _('Results on new tabs') }}</legend>
    <p>
        <select name='results_on_new_tab'>
            <option value="1" {% if results_on_new_tab %}selected="selected"{% endif %}>{{ _('On') }}</option>
            <option value="0" {% if not results_on_new_tab %}selected="selected"{% endif %}>{{ _('Off')}}</option>
        </select>
    </p>
</fieldset>
<fieldset>
    <legend>{{ _('Currently used search engines') }}</legend>
@@ -1,9 +1,9 @@
<div class="result {{ result.class }}">
    <h3 class="result_title"> {% if result['favicon'] %}<img width="14" height="14" class="favicon" src="static/{{theme}}/img/icon_{{result['favicon']}}.ico" alt="{{result['favicon']}}" />{% endif %}<a href="{{ result.url }}" rel="noreferrer">{{ result.title|safe }}</a></h3>
    <p class="url">{{ result.pretty_url }}&lrm; <a class="cache_link" href="https://web.archive.org/web/{{ result.url }}" rel="noreferrer">{{ _('cached') }}</a></p>
    <h3 class="result_title"> {% if result['favicon'] %}<img width="14" height="14" class="favicon" src="static/{{theme}}/img/icon_{{result['favicon']}}.ico" alt="{{result['favicon']}}" />{% endif %}<a href="{{ result.url }}" {% if results_on_new_tab %}target="_blank" rel="noopener noreferrer"{% else %}rel="noreferrer"{% endif %}>{{ result.title|safe }}</a></h3>
    <p class="url">{{ result.pretty_url }}&lrm; <a class="cache_link" href="https://web.archive.org/web/{{ result.url }}" {% if results_on_new_tab %}target="_blank" rel="noopener noreferrer"{% else %}rel="noreferrer"{% endif %}>{{ _('cached') }}</a></p>
    {% if result.publishedDate %}<p class="published_date">{{ result.publishedDate }}</p>{% endif %}
    <p class="content">{% if result.img_src %}<img src="{{ image_proxify(result.img_src) }}" class="image" />{% endif %}{% if result.content %}{{ result.content|safe }}<br class="last"/>{% endif %}</p>
    {% if result.repository %}<p class="result-content"><a href="{{ result.repository|safe }}" rel="noreferrer">{{ result.repository }}</a></p>{% endif %}
    {% if result.repository %}<p class="result-content"><a href="{{ result.repository|safe }}" {% if results_on_new_tab %}target="_blank" rel="noopener noreferrer"{% else %}rel="noreferrer"{% endif %}>{{ result.repository }}</a></p>{% endif %}

    <div dir="ltr">
        {{ result.codelines|code_highlighter(result.code_language)|safe }}
@@ -1,6 +1,6 @@
<div class="result {{ result.class }}">
    <h3 class="result_title">{% if "icon_"~result.engine~".ico" in favicons %}<img width="14" height="14" class="favicon" src="{{ url_for('static', filename='img/icons/icon_'+result.engine+'.ico') }}" alt="{{result.engine}}" />{% endif %}<a href="{{ result.url }}" rel="noreferrer">{{ result.title|safe }}</a></h3>
    <p class="url">{{ result.pretty_url }}&lrm; <a class="cache_link" href="https://web.archive.org/web/{{ result.url }}" rel="noreferrer">{{ _('cached') }}</a>
    <h3 class="result_title">{% if "icon_"~result.engine~".ico" in favicons %}<img width="14" height="14" class="favicon" src="{{ url_for('static', filename='img/icons/icon_'+result.engine+'.ico') }}" alt="{{result.engine}}" />{% endif %}<a href="{{ result.url }}" {% if results_on_new_tab %}target="_blank" rel="noopener noreferrer"{% else %}rel="noreferrer"{% endif %}>{{ result.title|safe }}</a></h3>
    <p class="url">{{ result.pretty_url }}&lrm; <a class="cache_link" href="https://web.archive.org/web/{{ result.url }}" {% if results_on_new_tab %}target="_blank" rel="noopener noreferrer"{% else %}rel="noreferrer"{% endif %}>{{ _('cached') }}</a>
    {% if result.publishedDate %}<span class="published_date">{{ result.publishedDate }}</span>{% endif %}</p>
    <p class="content">{% if result.img_src %}<img src="{{ image_proxify(result.img_src) }}" class="image" />{% endif %}{% if result.content %}{{ result.content|safe }}<br class="last"/>{% endif %}</p>
</div>
@@ -1,6 +1,6 @@
<div class="image_result">
    <p>
        <a href="{{ result.img_src }}" rel="noreferrer"><img src="{% if result.thumbnail_src %}{{ image_proxify(result.thumbnail_src) }}{% else %}{{ image_proxify(result.img_src) }}{% endif %}" title="{{ result.title|striptags }}" alt="{{ result.title|striptags }}" /></a>
        <span class="url"><a href="{{ result.url }}" rel="noreferrer" class="small_font">{{ _('original context') }}</a></span>
        <a href="{{ result.img_src }}" {% if results_on_new_tab %}target="_blank" rel="noopener noreferrer"{% else %}rel="noreferrer"{% endif %}><img src="{% if result.thumbnail_src %}{{ image_proxify(result.thumbnail_src) }}{% else %}{{ image_proxify(result.img_src) }}{% endif %}" title="{{ result.title|striptags }}" alt="{{ result.title|striptags }}" /></a>
        <span class="url"><a href="{{ result.url }}" {% if results_on_new_tab %}target="_blank" rel="noopener noreferrer"{% else %}rel="noreferrer"{% endif %} class="small_font">{{ _('original context') }}</a></span>
    </p>
</div>
@@ -5,8 +5,8 @@
{% endif %}

<div>
    <h3 class="result_title"><a href="{{ result.url }}" rel="noreferrer">{{ result.title|safe }}</a></h3>
    <p class="url">{{ result.pretty_url }}&lrm; <a class="cache_link" href="https://web.archive.org/web/{{ result.url }}" rel="noreferrer">{{ _('cached') }}</a>
    <h3 class="result_title"><a href="{{ result.url }}" {% if results_on_new_tab %}target="_blank" rel="noopener noreferrer"{% else %}rel="noreferrer"{% endif %}>{{ result.title|safe }}</a></h3>
    <p class="url">{{ result.pretty_url }}&lrm; <a class="cache_link" href="https://web.archive.org/web/{{ result.url }}" {% if results_on_new_tab %}target="_blank" rel="noopener noreferrer"{% else %}rel="noreferrer"{% endif %}>{{ _('cached') }}</a>
    {% if result.publishedDate %}<span class="published_date">{{ result.publishedDate }}</span>{% endif %}</p>
    <p class="content">{% if result.img_src %}<img src="{{ image_proxify(result.img_src) }}" class="image" />{% endif %}{% if result.content %}{{ result.content|safe }}<br class="last"/>{% endif %}</p>
</div>
@@ -2,12 +2,12 @@
{% if "icon_"~result.engine~".ico" in favicons %}
    <img width="14" height="14" class="favicon" src="{{ url_for('static', filename='img/icons/icon_'+result.engine+'.ico') }}" alt="{{result.engine}}" />
{% endif %}
<h3 class="result_title"><a href="{{ result.url }}" rel="noreferrer">{{ result.title|safe }}</a></h3>
<h3 class="result_title"><a href="{{ result.url }}" {% if results_on_new_tab %}target="_blank" rel="noopener noreferrer"{% else %}rel="noreferrer"{% endif %}>{{ result.title|safe }}</a></h3>
<p class="url">{{ result.pretty_url }}&lrm;</p>
{% if result.content %}<p class="content">{{ result.content|safe }}</p>{% endif %}
<p>
    {% if result.magnetlink %}<a href="{{ result.magnetlink }}" class="magnetlink">{{ _('magnet link') }}</a>{% endif %}
    {% if result.torrentfile %}<a href="{{ result.torrentfile }}" rel="noreferrer" class="torrentfile">{{ _('torrent file') }}</a>{% endif %} -
    {% if result.torrentfile %}<a href="{{ result.torrentfile }}" {% if results_on_new_tab %}target="_blank" rel="noopener noreferrer"{% else %}rel="noreferrer"{% endif %} class="torrentfile">{{ _('torrent file') }}</a>{% endif %} -
    <span class="stats">{{ _('Seeder') }} : {{ result.seed }}, {{ _('Leecher') }} : {{ result.leech }}</span>
</p>
</div>
@@ -1,6 +1,6 @@
<div class="result">
    <h3 class="result_title">{% if "icon_"~result.engine~".ico" in favicons %}<img width="14" height="14" class="favicon" src="{{ url_for('static', filename='img/icons/icon_'+result.engine+'.ico') }}" alt="{{result.engine}}" />{% endif %}<a href="{{ result.url }}" rel="noreferrer">{{ result.title|safe }}</a></h3>
    <h3 class="result_title">{% if "icon_"~result.engine~".ico" in favicons %}<img width="14" height="14" class="favicon" src="{{ url_for('static', filename='img/icons/icon_'+result.engine+'.ico') }}" alt="{{result.engine}}" />{% endif %}<a href="{{ result.url }}" {% if results_on_new_tab %}target="_blank" rel="noopener noreferrer"{% else %}rel="noreferrer"{% endif %}>{{ result.title|safe }}</a></h3>
    {% if result.publishedDate %}<span class="published_date">{{ result.publishedDate }}</span><br />{% endif %}
    <a href="{{ result.url }}" rel="noreferrer"><img class="thumbnail" src="{{ image_proxify(result.thumbnail) }}" title="{{ result.title|striptags }}" alt="{{ result.title|striptags }}"/></a>
    <a href="{{ result.url }}" {% if results_on_new_tab %}target="_blank" rel="noopener noreferrer"{% else %}rel="noreferrer"{% endif %}><img class="thumbnail" src="{{ image_proxify(result.thumbnail) }}" title="{{ result.title|striptags }}" alt="{{ result.title|striptags }}"/></a>
    <p class="url">{{ result.url }}&lrm;</p>
</div>
@@ -0,0 +1,9 @@
{% extends "oscar/base.html" %}
{% block content %}
<div class="text-center">
    <h1>{{ _('Page not found') }}</h1>
    {% autoescape false %}
    <p>{{ _('Go to %(search_page)s.', search_page='<a href="{}">{}</a>'.decode('utf-8').format(url_for('index'), _('search page'))) }}</p>
    {% endautoescape %}
</div>
{% endblock %}
@@ -7,27 +7,27 @@

<p>Searx is a <a href="https://en.wikipedia.org/wiki/Metasearch_engine">metasearch engine</a>, aggregating the results of other <a href="{{ url_for('preferences') }}">search engines</a> while not storing information about its users.
</p>
<h2>Why use Searx?</h2>
<h2>Why use searx?</h2>
<ul>
    <li>Searx may not offer you as personalised results as Google, but it doesn't generate a profile about you</li>
    <li>Searx doesn't care about what you search for, never shares anything with a third party, and it can't be used to compromise you</li>
    <li>Searx is free software, the code is 100% open and you can help to make it better. See more on <a href="https://github.com/asciimoo/searx">github</a></li>
    <li>searx may not offer you as personalised results as Google, but it doesn't generate a profile about you</li>
    <li>searx doesn't care about what you search for, never shares anything with a third party, and it can't be used to compromise you</li>
    <li>searx is free software, the code is 100% open and you can help to make it better. See more on <a href="https://github.com/asciimoo/searx">github</a></li>
</ul>
<p>If you do care about privacy, want to be a conscious user, or otherwise believe
in digital freedom, make Searx your default search engine or run it on your own server</p>
in digital freedom, make searx your default search engine or run it on your own server</p>

<h2>Technical details - How does it work?</h2>

<p>Searx is a <a href="https://en.wikipedia.org/wiki/Metasearch_engine">metasearch engine</a>,
inspired by the <a href="http://seeks-project.info/">seeks project</a>.<br />
It provides basic privacy by mixing your queries with searches on other platforms without storing search data. Queries are made using a POST request on every browser (except chrome*). Therefore they show up in neither our logs, nor your url history. In case of Chrome* users there is an exception, Searx uses the search bar to perform GET requests.<br />
It provides basic privacy by mixing your queries with searches on other platforms without storing search data. Queries are made using a POST request on every browser (except chrome*). Therefore they show up in neither our logs, nor your url history. In case of Chrome* users there is an exception, searx uses the search bar to perform GET requests.<br />
Searx can be added to your browser's search bar; moreover, it can be set as the default search engine.
</p>

<h2>How can I make it my own?</h2>

<p>Searx appreciates your concern regarding logs, so take the <a href="https://github.com/asciimoo/searx">code</a> and run it yourself! <br />Add your Searx to this <a href="https://github.com/asciimoo/searx/wiki/Searx-instances">list</a> to help other people reclaim their privacy and make the Internet freer!
<br />The more decentralized the Internet, is the more freedom we have!</p>
<br />The more decentralized the Internet is, the more freedom we have!</p>


<h2>More about searx</h2>
@@ -0,0 +1,9 @@
<input type="checkbox" name="advanced_search" id="check-advanced" {% if advanced_search %} checked="checked"{% endif %}>
<label for="check-advanced">
<span class="glyphicon glyphicon-cog"></span>
{{ _('Advanced settings') }}
</label>
<div id="advanced-search-container">
{% include 'oscar/categories.html' %}
{% include 'oscar/time-range.html' %}
</div>

@@ -2,7 +2,7 @@
<html xmlns="http://www.w3.org/1999/xhtml" lang="en" xml:lang="en"{% if rtl %} dir="rtl"{% endif %}>
<head>
<meta charset="UTF-8" />
<meta name="description" content="Searx - a privacy-respecting, hackable metasearch engine" />
<meta name="description" content="searx - a privacy-respecting, hackable metasearch engine" />
<meta name="keywords" content="searx, search, search engine, metasearch, meta search" />
<meta http-equiv="X-UA-Compatible" content="IE=edge">
<meta name="generator" content="searx/{{ searx_version }}">

@@ -90,8 +90,5 @@
{% for script in scripts %}
<script src="{{ url_for('static', filename=script) }}"></script>
{% endfor %}
<script type="text/javascript">
$(function() { $('a[data-toggle="modal"]').attr('href', '#'); });
</script>
</body>
</html>

@@ -1,42 +1,14 @@
<!-- used if scripts are disabled -->
<noscript>
<div id="categories" class="btn-group btn-toggle">
<div id="categories">
{% if rtl %}
{% for category in categories | reverse %}
<!--<div class="checkbox">-->
<input class="hidden" type="checkbox" id="checkbox_{{ category|replace(' ', '_') }}_nojs" name="category_{{ category }}" {% if category in selected_categories %}checked="checked"{% endif %} />
<label class="btn btn-sm btn-primary active label_hide_if_not_checked" for="checkbox_{{ category|replace(' ', '_') }}_nojs">{{ _(category) }}</label>
<label class="btn btn-sm btn-default label_hide_if_checked" for="checkbox_{{ category|replace(' ', '_') }}_nojs">{{ _(category) }}</label>
<!--</div>-->
{% if category in selected_categories %}<input class="hidden" type="checkbox" id="checkbox_{{ category|replace(' ', '_') }}_dis_activation" name="category_{{ category }}" value="off" checked="checked"/>{% endif %}
{% endfor %}
{% for category in categories | reverse %}
<input class="hidden" type="checkbox" id="checkbox_{{ category|replace(' ', '_') }}" name="category_{{ category }}" {% if category in selected_categories %}checked="checked"{% endif %} />
<label for="checkbox_{{ category|replace(' ', '_') }}">{{ _(category) }}</label>
</label>
{% endfor %}
{% else %}
{% for category in categories %}
<!--<div class="checkbox">-->
<input class="hidden" type="checkbox" id="checkbox_{{ category|replace(' ', '_') }}_nojs" name="category_{{ category }}" {% if category in selected_categories %}checked="checked"{% endif %} />
<label class="btn btn-sm btn-primary active label_hide_if_not_checked" for="checkbox_{{ category|replace(' ', '_') }}_nojs">{{ _(category) }}</label>
<label class="btn btn-sm btn-default label_hide_if_checked" for="checkbox_{{ category|replace(' ', '_') }}_nojs">{{ _(category) }}</label>
<!--</div>-->
{% if category in selected_categories %}<input class="hidden" type="checkbox" id="checkbox_{{ category|replace(' ', '_') }}_dis_activation" name="category_{{ category }}" value="off" checked="checked"/>{% endif %}
{% endfor %}
{% for category in categories %}
<input class="hidden" type="checkbox" id="checkbox_{{ category|replace(' ', '_') }}" name="category_{{ category }}" {% if category in selected_categories %}checked="checked"{% endif %} />
<label for="checkbox_{{ category|replace(' ', '_') }}">{{ _(category) }}</label>
{% endfor %}
{% endif %}
</div>
</noscript>

<div id="categories" class="btn-group btn-toggle hide_if_nojs" data-toggle="buttons">
{% if rtl %}
{% for category in categories | reverse %}
<label class="btn btn-sm {% if category in selected_categories %}btn-primary active{% else %}btn-default{% endif %}" data-btn-class="primary">
<input class="hidden" type="checkbox" id="checkbox_{{ category|replace(' ', '_') }}" name="category_{{ category }}" {% if category in selected_categories %}checked="checked"{% endif %} />{{ _(category) }}
</label>
{% endfor %}
{% else %}
{% for category in categories %}
<label class="btn btn-sm {% if category in selected_categories %}btn-primary active{% else %}btn-default{% endif %}" data-btn-class="primary">
<input class="hidden" type="checkbox" id="checkbox_{{ category|replace(' ', '_') }}" name="category_{{ category }}" {% if category in selected_categories %}checked="checked"{% endif %} />{{ _(category) }}
</label>
{% endfor %}
{% endif %}
</div>

@@ -1,21 +1,21 @@
{% from 'oscar/macros.html' import result_link with context %}
<div class="panel panel-default infobox">
<div class="panel-heading">
<bdi><h4 class="panel-title infobox_part">{{ infobox.infobox }}</h4></bdi>
<h4 class="panel-title infobox_part"><bdi>{{ infobox.infobox }}</bdi></h4>
</div>
<div class="panel-body">
<bdi>
{% if infobox.img_src %}<img class="img-responsive center-block infobox_part" src="{{ image_proxify(infobox.img_src) }}" alt="{{ infobox.infobox }}" />{% endif %}
{% if infobox.content %}<p class="infobox_part">{{ infobox.content }}</p>{% endif %}
{% if infobox.content %}<bdi><p class="infobox_part">{{ infobox.content }}</bdi></p>{% endif %}

{% if infobox.attributes %}
<table class="table table-striped infobox_part">
{% for attribute in infobox.attributes %}
<tr>
<td>{{ attribute.label }}</td>
<td><bdi>{{ attribute.label }}</bdi></td>
{% if attribute.image %}
<td><img class="img-responsive" src="{{ image_proxify(attribute.image.src) }}" alt="{{ attribute.image.alt }}" /></td>
{% else %}
<td>{{ attribute.value }}</td>
<td><bdi>{{ attribute.value }}</bdi></td>
{% endif %}
</tr>
{% endfor %}

@@ -24,11 +24,12 @@

{% if infobox.urls %}
<div class="infobox_part">
<bdi>
{% for url in infobox.urls %}
<p class="btn btn-default btn-xs"><a href="{{ url.url }}" rel="noreferrer">{{ url.title }}</a></p>
<p class="btn btn-default btn-xs">{{ result_link(url.url, url.title) }}</a></p>
{% endfor %}
</bdi>
</div>
{% endif %}
</bdi>
</div>
</div>

@@ -9,16 +9,20 @@
<img width="32" height="32" class="favicon" src="static/themes/oscar/img/icons/{{ favicon }}.png" alt="{{ favicon }}" />
{%- endmacro %}

{%- macro result_link(url, title, classes='') -%}
<a href="{{ url }}" {% if classes %}class="{{ classes }} "{% endif %}{% if results_on_new_tab %}target="_blank" rel="noopener noreferrer"{% else %}rel="noreferrer"{% endif %}>{{ title }}</a>
{%- endmacro -%}

<!-- Draw result header -->
{% macro result_header(result, favicons) -%}
<h4 class="result_header">{% if result.engine~".png" in favicons %}{{ draw_favicon(result.engine) }} {% endif %}<a href="{{ result.url }}" rel="noreferrer">{{ result.title|safe }}</a></h4>
{% macro result_header(result, favicons) -%}
<h4 class="result_header">{% if result.engine~".png" in favicons %}{{ draw_favicon(result.engine) }} {% endif %}{{ result_link(result.url, result.title|safe) }}</h4>
{%- endmacro %}

<!-- Draw result sub header -->
{% macro result_sub_header(result) -%}
{% if result.publishedDate %}<time class="text-muted" datetime="{{ result.pubdate }}" >{{ result.publishedDate }}</time>{% endif %}
{% if result.magnetlink %}<small> • <a href="{{ result.magnetlink }}" class="magnetlink">{{ icon('magnet') }} {{ _('magnet link') }}</a></small>{% endif %}
{% if result.torrentfile %}<small> • <a href="{{ result.torrentfile }}" class="torrentfile" rel="noreferrer">{{ icon('download-alt') }} {{ _('torrent file') }}</a></small>{% endif %}
{% if result.magnetlink %}<small> • {{ result_link(result.magnetlink, icon('magnet') + _('magnet link'), "magnetlink") }}</small>{% endif %}
{% if result.torrentfile %}<small> • {{ result_link(result.torrentfile, icon('download-alt') + _('torrent file'), "torrentfile") }}</small>{% endif %}
{%- endmacro %}

<!-- Draw result footer -->

@@ -28,7 +32,7 @@
{% for engine in result.engines %}
<span class="label label-default">{{ engine }}</span>
{% endfor %}
<small><a class="text-info" href="https://web.archive.org/web/{{ result.url }}" rel="noreferrer">{{ icon('link') }} {{ _('cached') }}</a></small>
<small>{{ result_link("https://web.archive.org/web/" + result.url, icon('link') + _('cached'), "text-info") }}</small>
</div>
<div class="text-muted"><small>{{ result.pretty_url }}</small></div>
{%- endmacro %}

@@ -39,7 +43,7 @@
{% for engine in result.engines %}
<span class="label label-default">{{ engine }}</span>
{% endfor %}
<small><a class="text-info" href="https://web.archive.org/web/{{ result.url }}" rel="noreferrer">{{ icon('link') }} {{ _('cached') }}</a></small>
<small>{{ result_link("https://web.archive.org/web/" + result.url, icon('link') + _('cached'), "text-info") }}</small>
<div class="text-muted"><small>{{ result.pretty_url }}</small></div>
{%- endmacro %}

@@ -68,9 +72,11 @@
{%- endmacro %}

{% macro checkbox_toggle(id, blocked) -%}
<div class="checkbox">
<input class="hidden" type="checkbox" id="{{ id }}" name="{{ id }}"{% if blocked %} checked="checked"{% endif %} />
<label class="btn btn-success label_hide_if_checked" for="{{ id }}">{{ _('Block') }}</label>
<label class="btn btn-danger label_hide_if_not_checked" for="{{ id }}">{{ _('Allow') }}</label>
<div class="onoffswitch">
<input type="checkbox" id="{{ id }}" name="{{ id }}"{% if blocked %} checked="checked"{% endif %} class="onoffswitch-checkbox">
<label class="onoffswitch-label" for="{{ id }}">
<span class="onoffswitch-inner"></span>
<span class="onoffswitch-switch"></span>
</label>
</div>
{%- endmacro %}

@@ -36,7 +36,7 @@
<label class="col-sm-3 col-md-2">{{ _('Default categories') }}</label>
{% else %}
<label class="col-sm-3 col-md-2">{{ _('Default categories') }}</label>
<div class="col-sm-11 col-md-10">
<div class="col-sm-11 col-md-10 search-categories">
{% include 'oscar/categories.html' %}
</div>
{% endif %}

@@ -117,6 +117,15 @@
<option value="pointhi" {% if cookies['oscar-style'] == 'pointhi' %}selected="selected"{% endif %}>Pointhi</option>
</select>
{{ preferences_item_footer(_('Choose style for this theme'), _('Style'), rtl) }}

{% set label = _('Results on new tabs') %}
{% set info = _('Open result links on new browser tabs') %}
{{ preferences_item_header(info, label, rtl) }}
<select class="form-control" name='results_on_new_tab'>
<option value="1" {% if results_on_new_tab %}selected="selected"{% endif %}>{{ _('On') }}</option>
<option value="0" {% if not results_on_new_tab %}selected="selected"{% endif %}>{{ _('Off')}}</option>
</select>
{{ preferences_item_footer(info, label, rtl) }}
</div>
</fieldset>
</div>

@@ -164,7 +173,9 @@
{% if not search_engine.private %}
<tr>
{% if not rtl %}
<td>{{ checkbox_toggle('engine_' + search_engine.name|replace(' ', '_') + '__' + categ|replace(' ', '_'), (search_engine.name, categ) in disabled_engines) }}</td>
<td class="onoff-checkbox">
{{ checkbox_toggle('engine_' + search_engine.name|replace(' ', '_') + '__' + categ|replace(' ', '_'), (search_engine.name, categ) in disabled_engines) }}
</td>
<th>{{ search_engine.name }}</th>
<td>{{ shortcuts[search_engine.name] }}</td>
<td><input type="checkbox" {{ "checked" if search_engine.safesearch==True else ""}} readonly="readonly" disabled="disabled"></td>

@@ -176,7 +187,9 @@
<td><input type="checkbox" {{ "checked" if search_engine.safesearch==True else ""}} readonly="readonly" disabled="disabled"></td>
<td>{{ shortcuts[search_engine.name] }}</td>
<th>{{ search_engine.name }}</th>
<td>{{ checkbox_toggle('engine_' + search_engine.name|replace(' ', '_') + '__' + categ|replace(' ', '_'), (search_engine.name, categ) in disabled_engines) }}</td>
<td class="onoff-checkbox">
{{ checkbox_toggle('engine_' + search_engine.name|replace(' ', '_') + '__' + categ|replace(' ', '_'), (search_engine.name, categ) in disabled_engines) }}
</td>
{% endif %}
</tr>
{% endif %}

@@ -203,7 +216,9 @@
<div class="panel-body">
<div class="col-xs-6 col-sm-4 col-md-6">{{ _(plugin.description) }}</div>
<div class="col-xs-6 col-sm-4 col-md-6">
<div class="onoff-checkbox">
{{ checkbox_toggle('plugin_' + plugin.id, plugin.id not in allowed_plugins) }}
</div>
</div>
</div>
</div>

@@ -5,7 +5,7 @@

{% if result.content %}<p class="result-content">{{ result.content|safe }}</p>{% endif %}

{% if result.repository %}<p class="result-content">{{ icon('file') }} <a href="{{ result.repository|safe }}" rel="noreferrer">{{ result.repository }}</a></p>{% endif %}
{% if result.repository %}<p class="result-content">{{ icon('file') }} <a href="{{ result.repository }}" {% if results_on_new_tab %}target="_blank" rel="noopener noreferrer"{% else %}rel="noreferrer"{% endif %}>{{ result.repository }}</a></p>{% endif %}

<div dir="ltr">
{{ result.codelines|code_highlighter(result.code_language)|safe }}

@@ -1,4 +1,4 @@
{% from 'oscar/macros.html' import result_header, result_sub_header, result_footer, result_footer_rtl, icon %}
{% from 'oscar/macros.html' import result_header, result_sub_header, result_footer, result_footer_rtl, icon with context %}

{{ result_header(result, favicons) }}
{{ result_sub_header(result) }}

@@ -1,12 +1,12 @@
{% from 'oscar/macros.html' import draw_favicon %}

<a href="{{ result.img_src }}" rel="noreferrer" data-toggle="modal" data-target="#modal-{{ index }}">
<a href="{{ result.img_src }}" {% if results_on_new_tab %}target="_blank" rel="noopener noreferrer"{% else %}rel="noreferrer"{% endif %} data-toggle="modal" data-target="#modal-{{ index }}">
<img src="{% if result.thumbnail_src %}{{ image_proxify(result.thumbnail_src) }}{% else %}{{ image_proxify(result.img_src) }}{% endif %}" alt="{{ result.title|striptags }}" title="{{ result.title|striptags }}" class="img-thumbnail">
</a>

<div class="modal fade" id="modal-{{ index }}" tabindex="-1" role="dialog" aria-hidden="true">
<div class="modal-dialog">
<div class="modal-content">
<div class="modal-wrapper">
<div class="modal-header">
<button type="button" class="close" data-dismiss="modal"><span aria-hidden="true">×</span><span class="sr-only">Close</span></button>
<h4 class="modal-title">{% if result.engine~".png" in favicons %}{{ draw_favicon(result.engine) }} {% endif %}{{ result.title|striptags }}</h4>

@@ -20,8 +20,14 @@
<span class="label label-default pull-right">{{ result.engine }}</span>
<p class="text-muted pull-left">{{ result.pretty_url }}</p>
<div class="clearfix"></div>
<a href="{{ result.img_src }}" rel="noreferrer" class="btn btn-default">{{ _('Get image') }}</a>
<a href="{{ result.url }}" rel="noreferrer" class="btn btn-default">{{ _('View source') }}</a>
<div class="row">
<div class="col-md-6">
<a href="{{ result.img_src }}" {% if results_on_new_tab %}target="_blank" rel="noopener noreferrer"{% else %}rel="noreferrer"{% endif %} class="btn btn-default">{{ _('Get image') }}</a>
</div>
<div class="col-md-6">
<a href="{{ result.url }}" {% if results_on_new_tab %}target="_blank" rel="noopener noreferrer"{% else %}rel="noreferrer"{% endif %} class="btn btn-default">{{ _('View source') }}</a>
</div>
</div>
</div>
</div>
</div>

@@ -15,7 +15,7 @@

<div class="container-fluid">
<div class="row">
<a href="{{ result.url }}" rel="noreferrer"><img class="thumbnail col-xs-6 col-sm-4 col-md-4 result-content" src="{{ image_proxify(result.thumbnail) }}" alt="{{ result.title|striptags }} {{ result.engine }}" /></a>
<a href="{{ result.url }}" {% if results_on_new_tab %}target="_blank" rel="noopener noreferrer"{% else %}rel="noreferrer"{% endif %}><img class="thumbnail col-xs-6 col-sm-4 col-md-4 result-content" src="{{ image_proxify(result.thumbnail) }}" alt="{{ result.title|striptags }} {{ result.engine }}" /></a>
{% if result.content %}<p class="col-xs-12 col-sm-8 col-md-8 result-content">{{ result.content|safe }}</p>{% endif %}
</div>
</div>

@@ -1,6 +1,6 @@
{% extends "oscar/base.html" %}
{% block title %}{{ q }} - {% endblock %}
{% block meta %}<link rel="alternate" type="application/rss+xml" title="Searx search: {{ q }}" href="{{ url_for('index') }}?q={{ q|urlencode }}&format=rss&{% for category in selected_categories %}category_{{ category }}=1&{% endfor %}pageno={{ pageno }}">{% endblock %}
{% block meta %}<link rel="alternate" type="application/rss+xml" title="Searx search: {{ q }}" href="{{ url_for('index') }}?q={{ q|urlencode }}&format=rss&{% for category in selected_categories %}category_{{ category }}=1&{% endfor %}pageno={{ pageno }}&time_range={{ time_range }}">{% endblock %}
{% block content %}
<div class="row">
<div class="col-sm-8" id="main_results">

@@ -41,6 +41,7 @@
{% for category in selected_categories %}<input type="hidden" name="category_{{ category }}" value="1"/>{% endfor %}
<input type="hidden" name="q" value="{{ q }}" />
<input type="hidden" name="pageno" value="{{ pageno+1 }}" />
<input type="hidden" name="time_range" value="{{ time_range }}" />
<button type="submit" class="btn btn-default"><span class="glyphicon glyphicon-backward"></span> {{ _('next page') }}</button>
</form>
</div>

@@ -48,6 +49,7 @@
<form method="{{ method or 'POST' }}" action="{{ url_for('index') }}" class="pull-left">
{% for category in selected_categories %}<input type="hidden" name="category_{{ category }}" value="1"/>{% endfor %}
<input type="hidden" name="pageno" value="{{ pageno-1 }}" />
<input type="hidden" name="time_range" value="{{ time_range }}" />
<button type="submit" class="btn btn-default" {% if pageno == 1 %}disabled{% endif %}><span class="glyphicon glyphicon-forward"></span> {{ _('previous page') }}</button>
</form>
</div>

@@ -60,6 +62,7 @@
<input type="hidden" name="q" value="{{ q }}" />
{% for category in selected_categories %}<input type="hidden" name="category_{{ category }}" value="1"/>{% endfor %}
<input type="hidden" name="pageno" value="{{ pageno-1 }}" />
<input type="hidden" name="time_range" value="{{ time_range }}" />
<button type="submit" class="btn btn-default" {% if pageno == 1 %}disabled{% endif %}><span class="glyphicon glyphicon-backward"></span> {{ _('previous page') }}</button>
</form>
</div>

@@ -68,6 +71,7 @@
{% for category in selected_categories %}<input type="hidden" name="category_{{ category }}" value="1"/>{% endfor %}
<input type="hidden" name="q" value="{{ q }}" />
<input type="hidden" name="pageno" value="{{ pageno+1 }}" />
<input type="hidden" name="time_range" value="{{ time_range }}" />
<button type="submit" class="btn btn-default"><span class="glyphicon glyphicon-forward"></span> {{ _('next page') }}</button>
</form>
</div>

@@ -118,7 +122,7 @@
<form role="form">
<div class="form-group">
<label for="search_url">{{ _('Search URL') }}</label>
<input id="search_url" type="url" class="form-control select-all-on-click cursor-text" name="search_url" value="{{ base_url }}?q={{ q|urlencode }}{% if selected_categories %}&categories={{ selected_categories|join(",") | replace(' ','+') }}{% endif %}{% if pageno > 1 %}&pageno={{ pageno }}{% endif %}" readonly>
<input id="search_url" type="url" class="form-control select-all-on-click cursor-text" name="search_url" value="{{ base_url }}?q={{ q|urlencode }}{% if selected_categories %}&categories={{ selected_categories|join(",") | replace(' ','+') }}{% endif %}{% if pageno > 1 %}&pageno={{ pageno }}{% endif %}{% if time_range %}&time_range={{ time_range }}{% endif %}" readonly>
</div>
</form>

@@ -130,6 +134,7 @@
<input type="hidden" name="format" value="{{ output_type }}">
{% for category in selected_categories %}<input type="hidden" name="category_{{ category }}" value="1">{% endfor %}
<input type="hidden" name="pageno" value="{{ pageno }}">
<input type="hidden" name="time_range" value="{{ time_range }}" />
<button type="submit" class="btn btn-default">{{ output_type }}</button>
</form>
{% endfor %}