Scrapli Netconf

If you’ve been working with NETCONF and are in the Python ecosystem, then you have almost certainly heard of, or even been using, the ncclient library. This has more or less been the de facto Python NETCONF client for… as long as I’ve been writing Python things in any meaningful capacity, and realistically quite a bit before that! (Wow, going back, the initial commit was April 2009!!!!)

I am not a fan of the ncclient library. This is an opinion only, so if you disagree no worries! ncclient clearly gets the job done and has for quite some time, however for me, the library is… obtuse? I find the docs lackluster, and the overall structure and design of the library a bit difficult to navigate.

Rather than gripe needlessly, here is a simple example of one of the things I find “obtuse” about ncclient:

>>> from ncclient import manager
>>>
>>> conn = manager.connect(
...     host="172.18.0.11",
...     port=830,
...     username="vrnetlab",
...     password="VR-netlab9",
...     hostkey_verify=False,
...     device_params={'name':'iosxe'}
... )
>>> dir(conn)
['HUGE_TREE_DEFAULT', '_Manager__set_async_mode', '_Manager__set_raise_mode', '_Manager__set_timeout', '__class__', '__delattr__', '__dict__', '__dir__', '__doc__', '__enter__', '__eq__', '__exit__', '__format__', '__ge__', '__getattr__', '__getattribute__', '__gt__', '__hash__', '__init__', '__init_subclass__', '__le__', '__lt__', '__module__', '__ne__', '__new__', '__reduce__', '__reduce_ex__', '__repr__', '__setattr__', '__sizeof__', '__str__', '__subclasshook__', '__weakref__', '_async_mode', '_device_handler', '_huge_tree', '_raise_mode', '_session', '_timeout', 'async_mode', 'channel_id', 'channel_name', 'client_capabilities', 'connected', 'execute', 'huge_tree', 'locked', 'raise_mode', 'scp', 'server_capabilities', 'session', 'session_id', 'take_notification', 'timeout']
>>>

The above snippet starts out easy enough – import the manager object from ncclient, create a connection object and then inspect that object with the dir function. If you’ve worked with NETCONF a bit you’ll know that there are a handful of methods that NETCONF clients/servers implement – things like get, get_config, edit_config etc., yet they are nowhere to be seen on this connection object!

This may be a trivial example to some, but for me this is pretty frustrating. The use of hasattr and getattr in the code base makes inspecting objects and determining what methods or attributes are available to you as a consumer of the library that much more difficult.
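
To illustrate the general pattern (a minimal sketch only – this is NOT ncclient’s actual code): when a class resolves its methods dynamically via __getattr__, the calls work fine at runtime, but dir() and your IDE have no way of discovering them.

# Minimal sketch of dynamic attribute dispatch -- NOT ncclient's actual code --
# just demonstrating why dynamically resolved methods never show up in dir()
class DynamicManager:
    _operations = ("get", "get_config", "edit_config")

    def __getattr__(self, name):
        # resolve operation names at call time rather than defining real methods
        if name in self._operations:
            return lambda *args, **kwargs: f"pretend we executed {name}"
        raise AttributeError(name)

conn = DynamicManager()
print(conn.get_config())   # works fine at runtime: 'pretend we executed get_config'
print(dir(conn))           # ...but 'get_config' is nowhere to be found here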

I don’t want to write a post about what I don’t like about ncclient, but I figured if I was writing and saying that I wasn’t a fan, I should at least have a tangible example as to why.

In the event you have similar feelings about ncclient as I do, I would like to introduce you to the scrapli-netconf library! scrapli-netconf is a library that I built on top of my other library, scrapli. The TL;DR on scrapli is that it is a Telnet and SSH Python client that supports both synchronous and asyncio usage with a variety of “transport” plugins.
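
As a very quick taste of scrapli “core” itself, a CLI-over-SSH session looks roughly like this (a sketch only – I’m using the IOSXEDriver here since the demo device is IOS-XE; pick the driver that matches your platform):

# Quick taste of scrapli "core" over plain SSH/CLI -- a sketch only; choose the
# driver that matches your platform
from scrapli.driver.core import IOSXEDriver

my_device = {
    "host": "172.18.0.11",
    "auth_username": "vrnetlab",
    "auth_password": "VR-netlab9",
    "auth_strict_key": False,
}

with IOSXEDriver(**my_device) as conn:
    response = conn.send_command("show version")
    print(response.result)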

scrapli-netconf takes advantage of the fact that NETCONF runs in an SSH subsystem, and the fact that several of scrapli's transport plugins support this subsystem. Rather than drone on about scrapli-netconf here – you can read more about it in the README – we’ll just jump right into a simple demo:

>>> from scrapli_netconf.driver import NetconfScrape
>>>
>>> my_device = {
...     "host": "172.18.0.11",
...     "auth_username": "vrnetlab",
...     "auth_password": "VR-netlab9",
...     "auth_strict_key": False,
...     "port": 830
... }
>>>
>>> conn = NetconfScrape(**my_device)
>>> conn.open()
>>> dir(conn)
['__annotations__', '__class__', '__delattr__', '__dict__', '__dir__', '__doc__', '__enter__', '__eq__', '__exit__', '__format__', '__ge__', '__getattribute__', '__gt__', '__hash__', '__init__', '__init_subclass__', '__le__', '__lt__', '__module__', '__ne__', '__new__', '__reduce__', '__reduce_ex__', '__repr__', '__setattr__', '__sizeof__', '__str__', '__subclasshook__', '__weakref__', '_build_base_elem', '_build_filters', '_build_readable_datastores', '_build_writeable_datastores', '_host', '_initialization_args', '_parse_server_capabilities', '_pre_commit', '_pre_discard', '_pre_edit_config', '_pre_get', '_pre_get_config', '_pre_lock', '_pre_rpc', '_pre_unlock', '_process_open', '_setup_auth', '_setup_callables', '_setup_comms', '_setup_host', '_setup_keepalive', '_setup_ssh_args', '_setup_timeouts', '_transport', '_transport_factory', '_validate_edit_config_target', '_validate_get_config_target', 'channel', 'channel_args', 'close', 'commit', 'discard', 'edit_config', 'get', 'get_config', 'isalive', 'lock', 'logger', 'message_id', 'netconf_version', 'on_close', 'on_open', 'open', 'readable_datastores', 'rpc', 'server_capabilities', 'strict_datastores', 'strip_namespaces', 'transport', 'transport_args', 'transport_class', 'unlock', 'writeable_datastores']

As you can see above, this is basically the same thing that we showed with ncclient, with the notable difference that with scrapli-netconf we are able to inspect the methods available to us. Functionally the two libraries are more or less equivalent here, and we can fetch the running config with a pretty similar syntax:

>>> cfg = conn.get_config(source="running")
>>> cfg
Response <Success: True>
>>> dir(cfg)
['__bool__', '__class__', '__delattr__', '__dict__', '__dir__', '__doc__', '__eq__', '__format__', '__ge__', '__getattribute__', '__gt__', '__hash__', '__init__', '__init_subclass__', '__le__', '__lt__', '__module__', '__ne__', '__new__', '__reduce__', '__reduce_ex__', '__repr__', '__setattr__', '__sizeof__', '__str__', '__subclasshook__', '__weakref__', '_record_response', '_record_response_netconf_1_0', '_record_response_netconf_1_1', 'channel_input', 'elapsed_time', 'failed', 'failed_when_contains', 'finish_time', 'genie_parse_output', 'genie_platform', 'get_xml_elements', 'host', 'netconf_version', 'raise_for_status', 'raw_result', 'result', 'start_time', 'strip_namespaces', 'textfsm_parse_output', 'textfsm_platform', 'xml_input', 'xml_result']

In the case of scrapli-netconf, however, the returned object is a scrapli Response object – which, if inspected, we can see contains some handy attributes and methods. Note that this is really a scrapli-netconf Response object, but it inherits from the “core” scrapli Response object (hence some of the methods, such as textfsm_parse_output, are not applicable to NETCONF).
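
A few of those attributes are handy for basic error handling around each operation – a quick sketch using the attribute names visible in the dir() output above:

# Basic sanity checking with the Response object (attribute names as shown in
# the dir() output above)
if cfg.failed:
    print(f"get-config against {cfg.host} failed!")
cfg.raise_for_status()    # raises if the rpc was not successful
print(f"rpc took {cfg.elapsed_time} seconds to complete")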

If we would like to see the string result of our operation, we can simply do so:

>>> cfg.result
'<rpc-reply xmlns="urn:ietf:params:xml:ns:netconf:base:1.0" message-id="101"><data><native xmlns="http://cisco.com/ns/yang/Cisco-IOS-XE-native"><version>16.12</version><boot-start-marker/><boot-end-marker/><memory><free><low-watermark><processor>72329</processor></low
<SNIP>

Or we can get the lxml Element object of our result:

>>> cfg.xml_result
<Element {urn:ietf:params:xml:ns:netconf:base:1.0}rpc-reply at 0x7f93981eeb40>

As I personally am not super savvy with XML, sometimes I just want to know what elements live in the XML tree the device returned without having to look at the result or parse it “manually” – the get_xml_elements method can help with that:

>>> cfg.get_xml_elements()
{'native': <Element {http://cisco.com/ns/yang/Cisco-IOS-XE-native}native at 0x7f93981eea80>, 'licensing': <Element {http://cisco.com/ns/yang/cisco-smart-license}licensing at 0x7f93981eec80>, 'netconf-yang': <Element {http://cisco.com/yang/cisco-self-mgmt}netconf-yang at 0x7f93981eec00>, 'acl': <Element {http://openconfig.net/yang/acl}acl at 0x7f93981eefc0>, 'interfaces': <Element {urn:ietf:params:xml:ns:yang:ietf-interfaces}interfaces at 0x7f9398208280>, 'lldp': <Element {http://openconfig.net/yang/lldp}lldp at 0x7f9398208100>, 'network-instances': <Element {http://openconfig.net/yang/network-instance}network-instances at 0x7f9398208040>, 'nacm': <Element {urn:ietf:params:xml:ns:yang:ietf-netconf-acm}nacm at 0x7f9398208340>, 'routing': <Element {urn:ietf:params:xml:ns:yang:ietf-routing}routing at 0x7f93981eed40>}
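
From there you can poke at any of those elements with normal lxml methods – a small sketch (assuming your device returned ietf-interfaces data like the output above) that prints the interface names:

# Walk the ietf-interfaces subtree with standard lxml calls -- assumes your
# device returned ietf-interfaces data like the output shown above
ns = "urn:ietf:params:xml:ns:yang:ietf-interfaces"
interfaces = cfg.get_xml_elements()["interfaces"]
for interface in interfaces.findall(f"{{{ns}}}interface"):
    print(interface.findtext(f"{{{ns}}}name"))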

scrapli-netconf can of course also be used (just like ncclient) to edit device configurations:

>>> edit_config_filter = """
... <config>
...     <interfaces xmlns="urn:ietf:params:xml:ns:yang:ietf-interfaces">
...         <interface>
...             <name>GigabitEthernet1</name>
...             <description>scrapli was here!</description>
...         </interface>
...     </interfaces>
... </config>"""
>>> result = conn.edit_config(config=edit_config_filter, target="running")
>>> result
Response <Success: True>
>>> result.result
'<rpc-reply xmlns="urn:ietf:params:xml:ns:netconf:base:1.0" message-id="102"><ok/></rpc-reply>'

Note: this is an IOSXE device and we edited the running datastore directly, so there is no need to commit or anything, but obviously that will depend on your target device type!
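
For platforms that do use a candidate datastore (IOS-XR, Junos, and friends) the flow looks roughly like the sketch below – lock, commit, discard, and unlock are all visible in the dir() output earlier, though the exact datastore names and payload will depend on your device:

# Rough sketch of a candidate-datastore workflow (IOS-XR/Junos style) -- exact
# datastore names and config payload depend on your device
conn.lock(target="candidate")
result = conn.edit_config(config=edit_config_filter, target="candidate")
if result.failed:
    conn.discard()    # throw away the candidate changes
else:
    conn.commit()     # promote candidate to running
conn.unlock(target="candidate")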

In addition to the core NETCONF features that I believe most network engineers will need (get/get-config/edit-config/lock/unlock/commit/discard), I believe that scrapli-netconf has quite a few other benefits:

  • It is built on scrapli and looks and feels very similar (because duh!) – if you are working in an environment that contains devices of all ages you may need SSH and NETCONF to interact with all your devices – you may even need Telnet! With scrapli and scrapli-netconf you can handle all of these connection types with the same code base and the same device setup.
  • Just like scrapli, the “standard” transport type for scrapli-netconf is your SSH client (/bin/ssh) – and that means that (just like scrapli) 100% of your normal OpenSSH directives/config file is supported! Have to ProxyJump through nine trillion jump hosts? No problem, just set up your SSH config file like you probably already are doing, and scrapli will work!
  • Asyncio support: If you need asyncio support, you need it! As far as I am aware, as of the time of writing, scrapli-netconf is the only asyncio Python NETCONF client – see the sketch after this list for what that looks like!
  • Fully strictly type checked! This helps with IDE auto-completion, and I believe has helped me to write generally better/clearer code.
  • Documentation – I am a zealot for docstrings and READMEs! While the majority of the README efforts have been made for scrapli “core”, the scrapli-netconf README is pretty robust as well, and all methods have accurate Google-style docstrings (and this is linted aggressively to keep me honest!)
  • Tests! Just like scrapli “core”, scrapli-netconf tries to be well tested! There are unit tests of course, but also “functional” tests that are run against devices running in vrnetlab – the scrapli “core” README contains a brief write-up of how you can set up this test environment yourself if you are so inclined. Regardless of whether you decide to set up the test environment, the tests also help provide some useful example code so you can see things in action (beyond just the examples).
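
And since I mentioned asyncio above, here is what the earlier demo could look like in async form – a hedged sketch only, as I am assuming the async driver is exposed as AsyncNetconfScrape and that the asyncssh transport is selected via transport="asyncssh"; double check the README for the exact names:

# Hedged sketch of the asyncio flavor -- the class name (AsyncNetconfScrape) and
# transport value ("asyncssh") are assumptions; check the README for specifics
import asyncio

from scrapli_netconf.driver import AsyncNetconfScrape

my_device = {
    "host": "172.18.0.11",
    "auth_username": "vrnetlab",
    "auth_password": "VR-netlab9",
    "auth_strict_key": False,
    "port": 830,
    "transport": "asyncssh",
}

async def main():
    conn = AsyncNetconfScrape(**my_device)
    await conn.open()
    cfg = await conn.get_config(source="running")
    print(cfg.result)
    await conn.close()

asyncio.run(main())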

I hope to continue to expand scrapli-netconf to add more functionality, and eventually cover 100% of the NETCONF RFCs (for now I’d say the 80/20 rule applies and that most use cases should be covered by this – but let me know if there is something missing that you are after!), and hopefully in the near term add this to the nornir-scrapli plugin as well!

So if you are looking for a Python NETCONF client, I’d love it if you gave scrapli-netconf a spin, and if you do, let me know on GitHub, Twitter, or Networktocode Slack what you think!

PS: I am especially sensitive to the “shit talking” of libraries now that I am a creator and maintainer of several packages (and even without being a library creator/maintainer I think we should all just be nice to each other!). This post is in no way intended to be disparaging to ncclient or any of its creators/maintainers/contributors. scrapli-netconf is just another implementation that (clearly, because I wrote it) suits my thinking/brain better – pick whichever works for your use cases, your brain, and your team!!

I’d also like to thank all the folks who have worked on ncclient over the years – I don’t think I know any of them personally, but without the work they did to build ncclient I almost certainly would not have built scrapli-netconf, so, thank you all!