[OpenWrt-Devel] Slow DNSMasq with > 100,000 entries in additional addresses file
Dave Taht
dave.taht at gmail.com
Sat Dec 31 11:39:35 EST 2016
On Sat, Dec 31, 2016 at 12:15 AM, TheWerthFam <thewerthfam at gmail.com> wrote:
> Quick report -
> So I didn't test pihole per say, but used that method of storing the
> blacklist into the hosts file for dnsmasq to use. Dnsmasq must use a
> different storage method for its hosts file. I loaded 850439 entries in the
> hosts file and restarted dnsmasq. I uses 1/2 as much memory than if loaded
> as a conf-file like adblock does. And its super fast and virtually non
> existent cpu usage. DNS lookups perform just like it should. Though the
> hosts file is now returning an IP address I specified for the blocked hosts
> - would have been nice to do the nxdomain. Think this will work for my
> needs, I can put a second IP address on the router and run pixelserv on it
> or something like that.
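>
> (For comparison, the two formats look roughly like this; the address
> is just an illustrative placeholder:
>
>   # conf-file style, as adblock generates it: answers NXDOMAIN
>   local=/domainnottogoto.com/
>
>   # hosts-file style, loaded via dnsmasq's addn-hosts: answers with
>   # whatever address the line carries
>   192.0.2.1 domainnottogoto.com
> )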
Good to know. I'm still interested in finding more
"read-only-thus-discardable data" methods for protecting home networks
and routers. This, for example:
https://plus.google.com/u/0/107942175615993706558/posts/635rm12isPq?sfc=true
> Cheers
> Derek
>
>
>
> On 12/29/2016 11:11 AM, Dave Taht wrote:
>>
>> On Thu, Dec 29, 2016 at 8:09 AM, TheWerthFam <thewerthfam at gmail.com>
>> wrote:
>>>
>>> Right now I'd rather not customize the code. There are two
>>> directions I'm going to try first.
>>> First, give unbound a try serving DNS while keeping dnsmasq for
>>> DHCP (rough sketch below). If that doesn't work, try converting the
>>> list to a hosts file pointing at a local pixelserv address. Some
>>> other blog posts indicate that the hosts file can handle a lot more
>>> entries, e.g. https://github.com/pi-hole/pi-hole
>>> Maybe just run pi-hole on OpenWrt.
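>>>
>>> Roughly what I have in mind for the split (untested; option names
>>> are from the standard docs): set "port=0" in dnsmasq.conf so
>>> dnsmasq keeps doing DHCP but stops answering DNS, run unbound on
>>> port 53, and render the blocklist as unbound local-zone entries,
>>> e.g.
>>>
>>>   local-zone: "domainnottogoto.com" always_nxdomain
>>>
>>> which would also get the NXDOMAIN behavior back.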
>>
>> Well, I've had a bit of fun feeding large blocklists into cmph. Using
>> the "chd" algorithm, it builds an 800K index file from a 24MB
>> blocklist (but you still need the original data and a secondary
>> index). I also fiddled a bit with bloom filters, which strike me as
>> apropos. It seems feasible to establish a large dataset of read-only
>> data with a fast index that can be discarded in low-memory
>> situations, rather than swapped out.
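>>
>> In case anyone wants to play along, here is a minimal sketch of the
>> cmph calls involved (my best recollection of the libcmph API;
>> untested, error handling omitted; "blocklist.txt" is one domain per
>> line). Note that a minimal perfect hash maps *any* string to some
>> slot, so a hit still has to be checked against the original key
>> stored for that slot; that's the secondary index mentioned above.
>>
>>   #include <cmph.h>
>>   #include <stdio.h>
>>   #include <string.h>
>>
>>   int main(void) {
>>       /* Build: feed the newline-separated blocklist to CHD. */
>>       FILE *keys = fopen("blocklist.txt", "r");
>>       cmph_io_adapter_t *source = cmph_io_nlfile_adapter(keys);
>>       cmph_config_t *config = cmph_config_new(source);
>>       cmph_config_set_algo(config, CMPH_CHD);
>>       cmph_t *mph = cmph_new(config);
>>       cmph_config_destroy(config);
>>
>>       /* Dump the small index to disk for later mmap()ing. */
>>       FILE *idx = fopen("blocklist.mph", "w");
>>       cmph_dump(mph, idx);
>>       fclose(idx);
>>
>>       /* Query: returns a slot in [0, nkeys). The caller must still
>>        * compare the domain against the key stored at that slot,
>>        * because unknown keys also land in *some* slot. */
>>       const char *key = "domainnottogoto.com";
>>       unsigned int slot = cmph_search(mph, key, (cmph_uint32)strlen(key));
>>       printf("%s -> slot %u\n", key, slot);
>>
>>       cmph_destroy(mph);
>>       cmph_io_nlfile_adapter_destroy(source);
>>       fclose(keys);
>>       return 0;
>>   }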
>>
>> I'll take a look at pi-hole...
>>
>>> Cheers
>>> Derek
>>>
>>>
>>> On 12/28/2016 02:21 PM, Dave Taht wrote:
>>>>
>>>> On Tue, Dec 27, 2016 at 11:03 PM, TheWerthFam <thewerthfam at gmail.com>
>>>> wrote:
>>>>>
>>>>> Thanks for the feedback, I'll look into NFQUEUE. I'm forcing the
>>>>> use of my DNS via iptables. I'm also using a transparent squid
>>>>> and e2guardian to filter content. I like the idea of the
>>>>> DNS-based blacklist to add some filtering capability, since I
>>>>> don't want to try to filter HTTPS sites. I know no solution is
>>>>> perfect.
>>>>
>>>> I've been thinking about this, and given the large amount of active
>>>> data in a very small memory space, another approach might be more
>>>> fruitful: convert the giant table into a minimal perfect hash and
>>>> mmap it into memory read-only, so it can be discarded under memory
>>>> pressure, unlike the ipset-, squid-, or dnsmasq-based approaches.
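>>>>
>>>> The mapping half is the easy part; something like this (sketch
>>>> only, with "blocklist.mph" standing in for whatever the hash
>>>> builder produced):
>>>>
>>>>   #include <fcntl.h>
>>>>   #include <stdio.h>
>>>>   #include <sys/mman.h>
>>>>   #include <sys/stat.h>
>>>>   #include <unistd.h>
>>>>
>>>>   int main(void) {
>>>>       int fd = open("blocklist.mph", O_RDONLY);
>>>>       struct stat st;
>>>>       if (fd < 0 || fstat(fd, &st) < 0) { perror("open"); return 1; }
>>>>
>>>>       /* Read-only, file-backed pages are always clean, so under
>>>>        * memory pressure the kernel can simply drop them and fault
>>>>        * them back in from flash later; nothing gets swapped. */
>>>>       void *idx = mmap(NULL, st.st_size, PROT_READ, MAP_PRIVATE, fd, 0);
>>>>       if (idx == MAP_FAILED) { perror("mmap"); return 1; }
>>>>       close(fd); /* the mapping stays valid after close */
>>>>
>>>>       /* ... consult the minimal-perfect-hash index via idx ... */
>>>>
>>>>       munmap(idx, st.st_size);
>>>>       return 0;
>>>>   }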
>>>>
>>>>
>>>>> Cheers
>>>>> Derek
>>>>>
>>>>>
>>>>>
>>>>> On 12/27/2016 01:53 PM, philipp_subx at redfish-solutions.com wrote:
>>>>>>>
>>>>>>> On Dec 26, 2016, at 10:32 AM, TheWerthFam <thewerthfam at gmail.com>
>>>>>>> wrote:
>>>>>>>
>>>>>>> Using the adblock set of scripts to block malware and porn
>>>>>>> sites. The porn-site list is 800,000 entries, about 10x the
>>>>>>> number of sites adblock normally uses. With the full list of
>>>>>>> malware and porn domains loaded, dnsmasq takes 115M of memory
>>>>>>> and normally sits around 50% CPU usage with moderate browsing.
>>>>>>> CPU and RAM usage aren't really a problem; the issue is that
>>>>>>> lookups are slow now. Platform is CC 15.05.1 r49389 on a Banana
>>>>>>> Pi R1.
>>>>>>>
>>>>>>> The adblock script takes the different lists and creates files
>>>>>>> in /tmp/dnsmasq.d/ with entries looking like
>>>>>>> local=/domainnottogoto.com/, one entry per line. The goal is to
>>>>>>> return NXDOMAIN for entries in the lists. The lists are sorted,
>>>>>>> with unique entries.
>>>>>>>
>>>>>>> I've tried increasing the cache size to 10,000, but that made
>>>>>>> no difference. I also tried neg-ttl=3600 with negative caching
>>>>>>> enabled (the default), again with no change.
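>>>>>>>
>>>>>>> (In raw /etc/dnsmasq.conf terms, that was roughly:
>>>>>>>
>>>>>>>   cache-size=10000
>>>>>>>   neg-ttl=3600
>>>>>>> )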
>>>>>>>
>>>>>>> Are there dnsmasq settings that will improve the performance,
>>>>>>> or should it be configured differently to achieve this goal?
>>>>>>> Perhaps unbound would be better suited?
>>>>>>>
>>>>>>> Cheers
>>>>>>> Derek
>>>>>>
>>>>>>
>>>>>> Not to rain on your parade, but the obvious way to defeat this
>>>>>> solution would be to point at an external website that does DNS
>>>>>> lookups for you, and then edit the URL to use an IP address in
>>>>>> place of the host name.
>>>>>>
>>>>>> I would use netfilter’s NFQUEUE and make a user-space decision
>>>>>> based on packet destination, since it seems you’re filtering
>>>>>> outbound requests (rough sketch below).
>>>>>>
>>>>>> After all, it’s not the NAME you don’t want to talk to… it’s the
>>>>>> HOST that bears that NAME.
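>>>>>>
>>>>>> A minimal, untested sketch of the user-space side with
>>>>>> libnetfilter_queue; is_blocked() is a hypothetical lookup
>>>>>> against your blocklist, and you would pair it with something
>>>>>> like "iptables -A FORWARD -j NFQUEUE --queue-num 0":
>>>>>>
>>>>>>   #include <arpa/inet.h>
>>>>>>   #include <libnetfilter_queue/libnetfilter_queue.h>
>>>>>>   #include <linux/netfilter.h>   /* NF_ACCEPT, NF_DROP */
>>>>>>   #include <netinet/ip.h>
>>>>>>   #include <stdint.h>
>>>>>>   #include <sys/socket.h>
>>>>>>
>>>>>>   /* Hypothetical: consult the blocklist by destination IP. */
>>>>>>   static int is_blocked(uint32_t daddr) { (void)daddr; return 0; }
>>>>>>
>>>>>>   static int cb(struct nfq_q_handle *qh, struct nfgenmsg *nfmsg,
>>>>>>                 struct nfq_data *nfa, void *data) {
>>>>>>       uint32_t id = 0;
>>>>>>       struct nfqnl_msg_packet_hdr *ph = nfq_get_msg_packet_hdr(nfa);
>>>>>>       if (ph) id = ntohl(ph->packet_id);
>>>>>>
>>>>>>       unsigned char *payload;
>>>>>>       int len = nfq_get_payload(nfa, &payload);
>>>>>>
>>>>>>       int verdict = NF_ACCEPT;
>>>>>>       if (len >= (int)sizeof(struct iphdr)) {
>>>>>>           struct iphdr *ip = (struct iphdr *)payload;
>>>>>>           if (is_blocked(ip->daddr))   /* decide by the HOST */
>>>>>>               verdict = NF_DROP;
>>>>>>       }
>>>>>>       return nfq_set_verdict(qh, id, verdict, 0, NULL);
>>>>>>   }
>>>>>>
>>>>>>   int main(void) {
>>>>>>       struct nfq_handle *h = nfq_open();
>>>>>>       nfq_bind_pf(h, AF_INET);
>>>>>>       struct nfq_q_handle *qh = nfq_create_queue(h, 0, &cb, NULL);
>>>>>>       nfq_set_mode(qh, NFQNL_COPY_PACKET, 0xffff);
>>>>>>
>>>>>>       char buf[4096];
>>>>>>       int fd = nfq_fd(h);
>>>>>>       ssize_t n;
>>>>>>       while ((n = recv(fd, buf, sizeof(buf), 0)) >= 0)
>>>>>>           nfq_handle_packet(h, buf, (int)n); /* dispatches to cb */
>>>>>>
>>>>>>       nfq_destroy_queue(qh);
>>>>>>       nfq_close(h);
>>>>>>       return 0;
>>>>>>   }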
>>>>>>
>>>>>> -Philip
>>>>>>
--
Dave Täht
Let's go make home routers and wifi faster! With better software!
http://blog.cerowrt.org
_______________________________________________
openwrt-devel mailing list
openwrt-devel at lists.openwrt.org
https://lists.openwrt.org/cgi-bin/mailman/listinfo/openwrt-devel