
I'm currently using Neocities to build my website. I chose it because it's one of the few hosts I've found where there's no limited website builder and you get to code the website yourself, and the server costs are covered for you. The downside is that the site deliberately does not support PHP; I haven't been given a clear answer as to why, but I digress.

I'd like to build a search bar that searches my site's HTML files. I assume that's not hard to do without PHP; the files are already there, it just needs to search through them. The problem is, if it's not done through PHP, it's done through JavaScript, which I don't know. I was wondering if someone could guide me through implementing this on my website. I'd prefer not to use engines like Google Custom Search, as they put their watermark in the bar and it looks very unprofessional.

grizzthedj
  • You can't search through files using normal JavaScript. You have to write an HTTP request for that manually, or else use some sort of AJAX library – Abana Clara Feb 21 '18 at 07:55
  • "the files are already there"...on the server, but JavScript runs in the browser and only knows about the page it's on (which gets downloaded to the rowser on the user's device, where the JavaScript executes). It does not know anything about the pages on the (now remote) server, and cannot unless it makes a HTTP request (e.g. via ajax) to the server to find out what the files are, and then another request to download and examine each file it wants to search through. That will be very slow inefficient. You need a server-side script (such as PHP or ASP.NET) for this. – ADyson Feb 21 '18 at 08:03
  • JavaScript is client-side, so you either need an imported JSON object that has references to each of the different pages, or a PHP page has to be written to handle checking which pages are available. – Hive7 Feb 21 '18 at 10:57
  • How would I go about doing the JSON object? –  Feb 21 '18 at 22:14
  • Oops, a semicolon that shouldn't be there: {"title": "This is the title", "body": "this is the body-text", "url": "example.com/this-is-the-url-to-the-page.html"}. And you need an array of objects (one object for each page), with key/value pairs for all the info you want to make searchable; see the sketch below. – Espen Klem Feb 28 '18 at 10:42
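
Spelled out, the array of objects described in the last comment might look something like this (a sketch; the titles and URLs are placeholders):

const pages = [
  {
    "title": "This is the title",
    "body": "this is the body-text",
    "url": "example.com/this-is-the-url-to-the-page.html"
  },
  {
    "title": "Another page",
    "body": "the body text of another page",
    "url": "example.com/another-page.html"
  }
]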

2 Answers


Search is usually done in the backend with a database, so if you really need search you will first of all need a backend. The alternative would be to write a keyword->page mapping in a JavaScript file, but that is not very dynamic and will not match many search words.
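
For illustration, such a mapping could look something like this (a minimal sketch; the keywords and page paths are made up):

// Hand-written keyword -> pages mapping, one entry per search term
var pages = {
  'recipes':  ['/recipes.html', '/blog/best-pasta.html'],
  'contact':  ['/contact.html'],
  'projects': ['/projects.html']
}

function search (query) {
  // Exact-keyword lookup only; phrases or typos won't match anything
  return pages[query.trim().toLowerCase()] || []
}

console.log(search('Recipes')) // ['/recipes.html', '/blog/best-pasta.html']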

William

You have at least a couple of alternatives: Lunr and search-index. They both can run in the browser.

I'm doing some work on search-index, so that is the one I know the most about, but I think both of them are good for your use case.

As some of the others say, you need to solve how to add the content of all the pages into one search. I guess the easiest way is to have your content both as static pages and as an array of JSON objects that is fed to the search engine. Then you can have the search index stored in LevelDB (Chrome) or IndexedDB (Firefox) until the next time a recurring user looks at the website.
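
As a concrete illustration, here is roughly how feeding that array into Lunr could look (a sketch; I'm showing Lunr rather than search-index here simply because its API is the one I can vouch for, and pages is assumed to be the JSON array described above):

const lunr = require('lunr')

// pages: array of {url, title, body} objects, one per static page
const idx = lunr(function () {
  this.ref('url')      // the value a search result points back to
  this.field('title')
  this.field('body')

  pages.forEach(function (page) {
    this.add(page)
  }, this)
})

// Returns matches like [{ref: 'example.com/...', score: 1.23}, ...]
console.log(idx.search('keyword'))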

If you have a link to the website and want to try out a solution like this, I'm in the process of making a library for it.

So far I've got the basics of the crawler running. It means you have to give the script all the URLs you want to crawl, but for a small site that's manageable. I'll make it find and crawl URLs on its own later.

const cheerio = require('cheerio')

// Page to crawl (fetch is the browser global once the file is browserified)
const url = 'https://example.com/a-page-to-crawl.html'

fetch(url)
  .then(function (response) {
    return response.text()
  })
  .then(function (html) {
    // Parse the HTML and pull out the fields to make searchable
    const $ = cheerio.load(html)
    const title = $('title').text()
    const body = $('body').text()
    const item = { url: url, title: title, body: body }
    console.log(item)
  })
  .catch(function (err) {
    console.error('Could not crawl ' + url, err)
  })

The file needs to be browserified with the brfs plugin:

browserify -t brfs main.js --debug -o bundle.js
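
And since the script currently needs to be handed every URL, crawling a whole list of pages could look roughly like this (the fetchPage helper and the URL list are my own placeholders, not part of the library):

const cheerio = require('cheerio')

// Hypothetical hand-maintained list of pages to crawl
const urls = [
  'https://example.com/index.html',
  'https://example.com/a-page-to-crawl.html'
]

function fetchPage (url) {
  return fetch(url)
    .then(function (response) { return response.text() })
    .then(function (html) {
      const $ = cheerio.load(html)
      return { url: url, title: $('title').text(), body: $('body').text() }
    })
}

// Crawl all pages in parallel and collect one item per page
Promise.all(urls.map(fetchPage)).then(function (items) {
  console.log(items)
})
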
Espen Klem