
This page has been created to discuss the architecture of OpenRemote.

I have some ideas about developments that I think would benefit the way OpenRemote works, and I would like to share these with the OpenRemote community to gauge opinion and encourage discussion on the roadmap of the project. Like many others, I have functionality that I would like to see incorporated, and to facilitate development I think everyone needs to agree on the direction the project is heading. Much of what I have written is idea generation and doesn't exist at present (although my poor grammar may suggest otherwise).

System Overview

Panels

  • I feel that the panels would be better if they were 'thinned out', so that the controller pushes UI layouts to the panels as the user interacts with them.
  • Panels would connect to the controller, so the controller knows exactly which panels are on the system and what state their UIs are in (for large numbers of panels a broadcast approach may be better, with panels deciding whether to ignore or listen to the message).
  • This facilitates user authentication on the panels (the ability to restrict certain screens to certain users).
  • If the connection is lost, panels should revert to an OpenRemote splash screen (with access to settings for host configuration). The controller could remember preferences for the various panels and load the correct startup screens, screen timeouts, etc. (using the panel MAC address for identification).
  • Panel elements need a flexible binding technique (I know this has been discussed in a recent post): the ability to change element properties dynamically (for example, a music player panel with a current album image, where the image changes as the track changes).

Devices

A device is either a protocol gateway (TCP/IP <-> protocol) or it sits underneath another gateway (and thus inherits that gateway's protocol and command structure). A gateway device may control multiple endpoint devices (a typical example is an IR-to-IP gateway, which may control your TV, amp, etc., or a KNX IP interface). The controller communicates with the gateway, not with the endpoint device.

Endpoint devices would support three types of command (status, control and event) where event commands can be further broken into two types (broadcast or monitor): -

  • Status Command - Returns the status of the device; a query is sent to the endpoint and the result is returned
  • Control Command - Controls the device by sending a control request (success of the command should be verified when possible)
  • Broadcast Event Command - Some protocols/devices support message broadcasting (over a UDP or TCP connection) and these commands provide a way of defining those broadcasts
  • Monitor Event Command - Allows events to be generated by monitoring a device status value; when the value changes (or meets defined logic) the event fires. This would allow functionality that doesn't exist on the device (onVolumeChange, onAlarmActivation, etc.)
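
To make the distinction concrete, here is a minimal sketch of how these command types might be modelled in Java. None of these types exist in OpenRemote today; all names are hypothetical.

    // Hypothetical sketch of the proposed command types; names are illustrative only.
    interface StatusCommand {
        String read();                    // query the endpoint and return its current status
    }

    interface ControlCommand {
        boolean execute(String... args);  // send a control request; true if verified successful
    }

    interface EventCallback {
        void fired(String value);         // invoked with the new value when the event fires
    }

    // Broadcast events arrive unsolicited from the device or gateway.
    interface BroadcastEventCommand {
        void subscribe(EventCallback callback);
    }

    // Monitor events are synthesised by polling a status and applying logic to it.
    interface MonitorEventCommand {
        void monitor(StatusCommand status, EventCallback callback);
    }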

These commands would then be exposed to the controller (as properties, methods and events), for example: -

Device Command  | Command Type    | Controller Command Path
getVolume       | Status          | device.status
setVolume       | Control         | device.setVolume(args[])
onVolumeChange  | Broadcast Event | device.onVolumeChange
volDropsBelow10 | Monitored Event | device.volDropsBelow10

This would also allow devices to be added to the public repository (Beehive). For example, if you added a Keene IR Anywhere module to your system to control a Samsung UE40B6000 LED TV, you could look in the repository for this combination of devices (Samsung TV endpoint device using a Keene IR Anywhere); if someone has submitted it, you get all of the control, event and status commands they submitted and can use them straight away.

Macros, Commands and Actions

  • Macros are what they are (a sequence of commands), with the ability to add pause/sleep (dummy) commands between successive commands for practical limitation reasons.
  • Commands are what the devices expose to the controller (as discussed above); a command can consist of one or more actions, and control commands support arguments (so one command can be used by multiple panel buttons rather than having lots of commands doing pretty much the same thing).
  • Actions are the bits that do the work of the command; there are four types of action (Send, Read, Script, Event).

By using actions, complex device interactions can be built but exposed as a single command. For example, I am using a PulseAudio telnet server to control a 10-channel sound card to provide 10-zone audio functionality; in order to set my zone1 to listen to source1, I have to send multiple telnet commands to the PulseAudio server (mute all zone1 sources individually and then un-mute source1). Action scripts further enhance this capability and provide the ability for data manipulation and intelligent commands.
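
As a rough illustration, the Send and Read action types could be modelled along these lines. This is a minimal sketch; the GatewayConnection, ActionContext and Action types are hypothetical, not existing OpenRemote APIs.

    import java.util.HashMap;
    import java.util.Map;

    // Connection to the device/gateway, e.g. the PulseAudio telnet server above.
    interface GatewayConnection {
        void write(String payload) throws Exception;
        String readResponse() throws Exception;
    }

    // Shared state passed along the action chain of a single command.
    class ActionContext {
        private final Map<String, Object> vars = new HashMap<>();
        private final GatewayConnection connection;
        ActionContext(GatewayConnection connection) { this.connection = connection; }
        GatewayConnection connection() { return connection; }
        void set(String name, Object value) { vars.put(name, value); }
        Object get(String name) { return vars.get(name); }
    }

    interface Action {
        void perform(ActionContext ctx) throws Exception;
    }

    class SendAction implements Action {
        private final String payload;
        SendAction(String payload) { this.payload = payload; }
        public void perform(ActionContext ctx) throws Exception {
            ctx.connection().write(payload);    // send raw payload to the gateway
        }
    }

    class ReadAction implements Action {
        public void perform(ActionContext ctx) throws Exception {
            // store the response where later actions (e.g. SCRIPT) can find it
            ctx.set("commandResult", ctx.connection().readResponse());
        }
    }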

Back to the music player example: I could have an MPD device that exposes an artistAndTitle status command, and this command would then look something like the following: -

  • Action 1 = SEND("currentartist")
  • Action 2 = SEND("currenttitle")
  • Action 3 = READ()
  • Action 4 = SCRIPT("combineArtistandTitle.js")

The READ action simply reads the response from the server for the previous two SEND actions and stores it in the commandResult string variable which is then available to the SCRIPT action for processing.
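
Continuing the sketch above, the SCRIPT action could be implemented on top of JSR-223 scripting (mentioned later in this discussion as a candidate for event processors too). Again hypothetical: it assumes the ActionContext/Action types from the previous sketch and a user-supplied combineArtistandTitle.js.

    import java.io.FileReader;
    import javax.script.ScriptEngine;
    import javax.script.ScriptEngineManager;

    class ScriptAction implements Action {
        private final String scriptFile;
        ScriptAction(String scriptFile) { this.scriptFile = scriptFile; }

        public void perform(ActionContext ctx) throws Exception {
            // requires a JSR-223 JavaScript engine on the classpath
            ScriptEngine js = new ScriptEngineManager().getEngineByName("javascript");
            js.put("commandResult", ctx.get("commandResult"));  // expose the READ result
            ctx.set("commandResult", js.eval(new FileReader(scriptFile)));
        }
    }

    // The artistAndTitle status command would then be the action chain:
    // new SendAction("currentartist"), new SendAction("currenttitle"),
    // new ReadAction(), new ScriptAction("combineArtistandTitle.js")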

Command Response Format

I have no suggestions on this, but a unified approach is needed for all types of command (status commands will return a value; should this always be a string? Control commands shouldn't return a value but should confirm success where appropriate).

Controller

The controller would be responsible for informing the gateways which device statuses to poll and also which device events to report back to the controller. The requirements would depend on what screens are displayed on the panels and what logic events exist.

Controller events would come in two forms: -

  • Timed Events - Scheduled events (i.e. at 6am on Mondays play wakeup music in zone7)
  • Logic Events - State dependent events (i.e. if zone1 temperature > 30degC then open the window)

Controller events would fire commands (in the same way that panel buttons etc. fire commands).
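
For illustration, a logic event might reduce to something like the sketch below; all names here are hypothetical, and the declarative rule engine discussed later in this page would express the same thing without code.

    import java.util.function.DoublePredicate;

    // Minimal sketch of a state-dependent 'logic event' firing a command.
    class LogicEvent {
        private final DoublePredicate condition;
        private final Runnable command;

        LogicEvent(DoublePredicate condition, Runnable command) {
            this.condition = condition;
            this.command = command;
        }

        // called whenever the monitored sensor reports a new value
        void onSensorValue(double value) {
            if (condition.test(value)) {
                command.run();
            }
        }
    }

    // Usage, e.g. "if zone1 temperature > 30degC then open the window":
    // new LogicEvent(t -> t > 30.0, () -> windowActuator.open("zone1"));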

If a panel element is bound to a device status, the controller tells the gateway manager 'I am interested in this command'; the gateway manager then tells the corresponding gateway, and the gateway polls that device status and reports back value changes (thus creating a monitor event).
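
A sketch of that register-interest flow, with all type names hypothetical:

    import java.util.HashMap;
    import java.util.Map;

    interface StatusListener {
        void statusChanged(String deviceStatus, String newValue);
    }

    interface Gateway {
        // the gateway starts polling this status and pushes value changes back
        void registerInterest(String deviceStatus, StatusListener listener);
    }

    class GatewayManager {
        private final Map<String, Gateway> gatewaysByDevice = new HashMap<>();

        // the controller says "I am interested in this device status"
        void registerInterest(String deviceId, String status, StatusListener listener) {
            Gateway gateway = gatewaysByDevice.get(deviceId);
            if (gateway != null) {
                gateway.registerInterest(status, listener);
            }
        }
    }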

Controller scripting possibilities....

The ability to add/remove controller events on-the-fly would also be useful (e.g. if you wanted to set a house-wide wake-up alarm for 5am the next day, or to stop an existing scheduled/logic event).



Command Response Format

I have no suggestions on this, but a unified approach is needed for all types of command (status commands will return a value; should this always be a string? Control commands shouldn't return a value but should confirm success where appropriate).

Should not always be a string. Strings only act as an unstructured serialization format of an unknown char sequence.

I've been struggling a bit with locking down where I want to put the event datatype abstraction. On one hand it could be at the datatype primitive level (an equivalent example here is KNX datatypes: boolean, 8-bit integer, string, etc.); on the other hand it could be at a higher abstraction level, something equivalent to KNX datapoint types (for boolean datatype primitives there are multiple datapoint types, e.g. ON/OFF, OPEN/CLOSE, ACTIVE/INACTIVE, etc.).

I'm leaning towards datatype primitives (lower abstraction level).

One reason, and the use case driving towards the datatype primitives, is delaying the data interpretation to event processors; that is, I can create an event type that stores a raw byte[] array whose data payload is completely interpreted by the event processors.

There are some real examples behind this, one of which is interpreting a device data stream and triggering actions based on the stream data. On the other hand, I will attempt to leave the event definition extensible to the extent that higher abstractions (Door Open, Door Closed) are possible to support.
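
A minimal sketch of what such a late-interpreted event might look like; the class and field names are illustrative only (the actual abstraction lives in the work-in-progress branch linked below).

    // Illustrative only: an event carrying a raw payload, interpreted later by event processors.
    class RawEvent {
        private final String source;    // originating sensor/device
        private final byte[] payload;   // uninterpreted data from the protocol layer

        RawEvent(String source, byte[] payload) {
            this.source = source;
            this.payload = payload.clone();
        }

        String getSource()  { return source; }
        byte[] getPayload() { return payload.clone(); }
    }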

The current work-in-progress implementation does the conversion from the string serialization format to an event late, at the sensor, which can be seen here: https://openremote.svn.sourceforge.net/svnroot/openremote/workspace/juha/Controller_EP_SNAP_20110412/src/org/openremote/controller/model/sensor/Sensor.java

Eventually the Event abstraction will be exposed to those who contribute protocol implementations through the ReadCommand and Event Listener APIs (note: these currently still use Strings, as they are going through API migration); in other words, there will be a higher-level protocol API to integrate with when creating new protocol integrations.

However, the existing (string-based) interfaces and abstractions will stay in place, both to facilitate migration of existing implementations (protocols can be moved to the new API one by one) and to provide a "raw" API that allows protocol integrations to bypass a lot of the scaffolding (connection management implementations, caching implementations, and the sort) if/when necessary.

Posted by juha at Apr 27, 2011 12:23

I should also add: this is driving towards some mismatch between the datatypes understood by protocol implementations and the datatypes understood by panels. The controller sits in the middle of these two integration points and provides an appropriate mapping between the sets of datatypes they understand.

Posted by juha at Apr 27, 2011 12:28

Very good ideas, thanks for sharing them.
If I can feed the discussion, I'll add these points:

  • controller: should have a logging system to keep track of information coming from devices, for the purpose of displaying historical data/graphs, e.g. temperature, consumption, ...
  • controller: if the controller knows which UI elements are displayed, it should be able to avoid requesting unnecessary sensor info; an optimisation to reduce bus bandwidth. E.g. an iPad panel with several pages: if you have a sensor on one page, the controller requests the sensor state in a loop even if the sensor is not displayed.
  • panel: your suggestion is to have it 'thin', with the controller sending the UI on the fly when needed. OK with the principle, but the panel has to stay very reactive; you can't wait several seconds for a new page to be displayed. So, at the least, heavy UI assets should be preloaded, e.g. background images. I really think that, to be user friendly, the panel has to be very performant and reactive.

regards

Posted by yannph at Apr 27, 2011 14:03

controller: should have a logging system to keep track of information coming from devices, for the purpose of displaying historical data/graphs, e.g. temperature, consumption, ...

Yes. The logging subsystem is already in place; however, the logging is not yet effectively used and will initially be at protocol frame level.

We are, however, working right now on the first cut of collecting and displaying historical data.

controller: if the controller knows which UI elements are displayed, it should be able to avoid requesting unnecessary sensor info; an optimisation to reduce bus bandwidth. E.g. an iPad panel with several pages: if you have a sensor on one page, the controller requests the sensor state in a loop even if the sensor is not displayed.

This is not quite how it works. The panels' use of sensors is not directly linked to the sensors themselves; they're decoupled. So the number of sensors on panels does not increase the number of requests sent to devices.

Also, in terms of panels, they already only ask for data for the UI widgets that are being displayed to the user, not all of them.

Finally, the read requests should not be linked to panels in any way regardless; we will be processing events from devices that are not necessarily reflected on the panel UI in any way (but handled by event processors).

panel: your suggestion is to have it 'thin', with the controller sending the UI on the fly when needed. OK with the principle, but the panel has to stay very reactive; you can't wait several seconds for a new page to be displayed. So, at the least, heavy UI assets should be preloaded, e.g. background images. I really think that, to be user friendly, the panel has to be very performant and reactive.

True, the model proposed is exactly that of a web browser application, and responsiveness is an issue there (we've seen the same feedback on home automation systems that attempted this approach). So preloading, local storage and caching are needed. These will be part of the bag of tricks known as HTML5, but it's not quite ready yet.

Posted by juha at Apr 27, 2011 18:01

Juha said:
"This is not quite how it works. The panels' use of sensors is not directly linked to the sensors themselves; they're decoupled. So the number of sensors on panels does not increase the number of requests sent to devices.
Also, in terms of panels, they already only ask for data for the UI widgets that are being displayed to the user, not all of them."

So I may have misused the designer/controller and iPad panel. Currently, if I have 3 pages with, let's say, 3 sensors per page, the controller will request the values of all 9 sensors every second, whatever the displayed page.
So yes, the panel only requests the 3 displayed sensors, but the controller will send READ status commands for all sensors defined in the panel.
So a suggestion could be to have "display sensors" and "event sensors": "event sensors" would be read constantly (with the delay defined per sensor) and "display sensors" would be read by the controller only when the panel is displaying them?

It's just an idea to reduce the KNX bandwidth as it is used today in my project.

Regards

Posted by yannph at Apr 27, 2011 18:56

I'm glad that this has generated a little bit of discussion.

In terms of panel responsiveness, I don't see bandwidth as a limitation, as panels should be on the same LAN as the controller; the speed of drawing the UI is the key part. HTML5 does offer some useful functionality, especially for pushing events to panels rather than using the currently accepted bodge of trying to keep the connection open, which isn't very reliable.

Finally, the read requests should not be linked to panels in any way regardless - we will be processing events from devices that are not necessarily reflected on panel UI in any way (but handled by event processors).

I think the controller should be aware of which device statuses the panels and the event systems are interested in, which then allows the controller to register this interest with the appropriate device threads. So rather than the present situation of all sensors being constantly polled (currently PollingMachineThreads), only the registered device status values need to be polled.

The connection manager (a more appropriate term would be device manager) would provide a proxy for device communication, and the individual gateways (device threads) would manage the device communication and monitor value changes in the device statuses of interest; these changes are then pushed back to the controller in a protocol-independent manner.

I feel that ring-fencing the devices and exposing events, statuses and controls provides a neat way of visualising and building up a system. Most devices have a fixed set of commands they support; all of this can be captured and neatly bundled up, and when it comes to interacting with a device (be it on a panel or through a monitor event) the device command, status or event can be easily referenced through a controller namespace (controller.devices.bedroom1tv.volumeup(1)).

Locations could even be introduced into the namespace (controller.zones.bedroom1.tv.volumeup(1)).
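
A sketch of how such a namespace might resolve to device commands; the types here are hypothetical and only illustrate the idea:

    import java.util.HashMap;
    import java.util.Map;

    interface Device {
        // e.g. invoke("volumeup", 1)
        void invoke(String command, Object... args);
    }

    class DeviceNamespace {
        private final Map<String, Device> devices = new HashMap<>();

        Device device(String name) {
            return devices.get(name);   // e.g. "bedroom1tv"
        }
    }

    // Usage: controller.devices().device("bedroom1tv").invoke("volumeup", 1);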

Juha, is your work-in-progress branch usable at present?

The work I have done on the device gateways was to develop functionality I thought the system lacked (and things work well for me), but I think I got stuck in the trap of how the system currently works and based things on that. Hence I scratched my head for a while and formulated this discussion to get my thoughts down on what I think is a longer-term strategy, as I have an interest in where the project goes and would like to be involved in improving the software for my needs and those of others in the community.

This is a bit of a learning process for me, so I am glad to have feedback on what I can offer, as I do get carried away with ideas sometimes.

Rich

Posted by kurrazyman at Apr 27, 2011 18:59

No, you haven't misused anything; it's just that separate concepts are getting mixed up here.

The controller will poll sensors (when polling sensors are used; they're necessary for 'passive' devices) regardless of how many sensors you have on your panels. So that is correct, and necessary for automation (as opposed to simple remote control).

  1. Whether that request translates to a read request to a device is up to the protocol implementation; a simplistic implementation maps it to a device read request every time (passive device). The initial KNX prototype was passing read requests through even though it was already listening on the bus.
  2. Protocols that actively broadcast device state can handle this differently, i.e. KNX can listen to bus frames to reduce the KNX bandwidth (btw, this has been done in https://openremote.svn.sourceforge.net/svnroot/openremote/branches/project/Controller_2_0_0_Alphas-KNXIP/ by Olivier; I was merging it back to the release branches along with your KNX patches but hit an issue I want to fix before I'm able to complete).

So give Olivier's current work-in-progress (which relies completely on the KNX bus listener) a try, or wait a few days until I have time to get it all together into the release branch.

Posted by juha at Apr 27, 2011 20:25

I think the controller should be aware of which device statuses the panels and the event systems are interested in, which then allows the controller to register this interest with the appropriate device threads. So rather than the present situation of all sensors being constantly polled (currently PollingMachineThreads), only the registered device status values need to be polled.

It is when it comes to panels, but it won't be when it comes to the event processors, not in the first iteration at the very least. The reason is that the flow of control in general is from sensors into the event processors, not vice versa.

Many of the event processors are defined declaratively rather than imperatively. Rules in particular have declarative definitions (when something occurs, then perform an action), and parsing these definitions for semantics specific to sensor use would require OpenRemote-specific context-sensitive parsers, which will not be part of the first implementations. The same would apply to each and every scripting language X we'd want to support, creating a rather painful overhead for integrating new languages into the system.

The alternative of requiring explicit registration of which sensors are accessed by event processors is not very appealing either. First, it adds an additional step and a failure point (such declarations must be present); and lacking up-front parsers, a failure to declare use would only be detectable at run-time, making it problematic to catch errors in event conditions that occur rarely.

For this reason, the flow of control where events are pushed into the event processors, rather than having to register specifically for polling, makes the system less complex and more reliable. It is also worth noting that while polling (which seems to be the real issue) is needed to cover all the use cases, it is not universally needed: protocols such as KNX, UPnP, ZigBee, etc. can actively broadcast device state changes into the controller.

So I do think requiring registration is more trouble than it's worth at this point in time.

Posted by juha at Apr 27, 2011 23:46

Hi Juha,

Points very well made there.

If the sensor is going to push the event into the event system, where is the event logic defined? Would this be included in the sensor definition (if sensor value changes then ...)?

Wouldn't you need an explicit event definition to define the event condition and event action (or will it just be a case of 'on sensor value change do command n', defined within the sensor definition)?

The term sensor at present refers to panel output elements, so what if you wanted to fire an event for a device status that isn't used on a panel?

I was thinking more along the lines that 'sensors' could be implied from the user's requirements and the device definitions.

If, for example, a TV device behind an RS232/C-to-IP gateway exposed the following basic command set: -

Command Type | Name
Control      | On
Control      | Off
Control      | Vol Up
Control      | Vol Down
Control      | Channel Up
Control      | Channel Down
Status       | TV Status
Status       | Vol

This device doesn't provide any broadcast events (i.e. the TV doesn't send a message indicating that the volume or channel has changed), but such events could be added to the device definition by the user as monitor event commands: -

Event Name      | Monitor What                                     | Description
OnVolumeChange  | OnValueChange(Vol)                               | When the Vol status changes this event fires
OnChannelChange | OnCommand(Channel Up) OR OnCommand(Channel Down) | When the user changes channel this event fires

The OnChannelChange monitor event waits for channel up or down control commands as there is no status command for the current channel.

This would mean all of the device logic is encapsulated in the device definition, which in this example would become: -

Command Type | Name
Control      | On
Control      | Off
Control      | Vol Up
Control      | Vol Down
Control      | Channel Up
Control      | Channel Down
Status       | TV Status
Status       | Vol
Event        | OnVolumeChange
Event        | OnChannelChange

Device <-> controller communication could be event-based, and the devices would be almost independent of the controller (almost acting as a view in an MVC system, with a bit of hidden business logic inside the device providing the monitor event functionality).
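
To make the encapsulation concrete, the TV definition above might be captured along these lines. This is a hypothetical builder sketch, not an existing API; the trigger expressions are kept as plain-text placeholders.

    import java.util.ArrayList;
    import java.util.List;

    // Hypothetical sketch of a device definition bundling commands and monitor events.
    class DeviceDefinition {
        private final String name;
        private final List<String> controls = new ArrayList<>();
        private final List<String> statuses = new ArrayList<>();
        private final List<String> events = new ArrayList<>();

        DeviceDefinition(String name) { this.name = name; }

        DeviceDefinition control(String cmd) { controls.add(cmd); return this; }
        DeviceDefinition status(String cmd)  { statuses.add(cmd); return this; }

        // 'trigger' is a textual placeholder for the monitor logic, e.g.
        // "OnValueChange(Vol)" or "OnCommand(Channel Up) OR OnCommand(Channel Down)"
        DeviceDefinition monitorEvent(String eventName, String trigger) {
            events.add(eventName + " <- " + trigger);
            return this;
        }
    }

    // Usage, mirroring the tables above:
    // new DeviceDefinition("TV")
    //     .control("On").control("Off")
    //     .control("Vol Up").control("Vol Down")
    //     .control("Channel Up").control("Channel Down")
    //     .status("TV Status").status("Vol")
    //     .monitorEvent("OnVolumeChange", "OnValueChange(Vol)")
    //     .monitorEvent("OnChannelChange", "OnCommand(Channel Up) OR OnCommand(Channel Down)");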

In some basic sense a panel is just another device that communicates over HTTP. Obviously panels are a bit more sophisticated, as they're dynamic, but the principle is the same.

Personally, I don't see panels as the ultimate means of home control; I am excited by the possibility of a device such as the Xbox Kinect for audio/visual control of the system.

Devices that provide broadcast functionality (KNX, UPnP, UDP (IR-to-IP gateways, etc.)) would just expose broadcast events, which avoids the need to manually add monitor events to the device definition, although some may still be desired to increase functionality (adding events that don't currently exist on the device).

If events were driven by sensors, wouldn't that prevent more complex event logic (if the TV switches on and the lights are on, then dim the lights)? Whereas if the event system were a bit more independent, this would be possible.

Look forward to hearing your views.

Rich

Posted by kurrazyman at Apr 30, 2011 11:06

If the sensor is going to push the event into the event system, where is the event logic defined? Would this be included in the sensor definition (if sensor value changes then ...)?

No, by the event processors; the events themselves encapsulate the event data. The event processors are pluggable and can be chained to act on the event data.

See here: Controller 2.0 Event Processing

Wouldn't you need an explicit event definition to define the event condition and event action (or will it just be a case of 'on sensor value change do command n', defined within the sensor definition)?

There will be an event definition; however, the event action in this case will be externalized.

The term sensor at present refers to panel output elements, so what if you wanted to fire an event for a device status that isn't used on a panel?

The same mechanism applies; sensors were never directly linked to panels, but operated through the state cache, which the panels consume. Actions can be triggered by event processors without any knowledge of panels (the controller and panels were always decoupled).

If events were driven by sensors, wouldn't that prevent more complex event logic (if the TV switches on and the lights are on, then dim the lights)? Whereas if the event system were a bit more independent, this would be possible.

No, actually the opposite: we are flowing all events from all event producers through a common event processor chain, which is exactly what is used for the kind of use case you describe. In a more blue-sky view we approach a blackboard system with a shared knowledge base, part of which is already in place through an integration of a rule engine as an event processor. This will scale the model up to include pattern recognition (of the incoming event flow) and complex event processing.

On the other hand, an event processor may simply be a scripting language (via JSR-223, for example) or even a dedicated Java implementation, such as event logging and upload to a support service organization.
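
A minimal sketch of such a pluggable, chained processor; the type names are illustrative (see the Controller 2.0 Event Processing page for the actual design), and RawEvent refers to the raw-payload event sketched earlier in this thread.

    import java.util.List;

    // Illustrative only: each processor may act on the event, enrich it or trigger
    // commands, then pass it along the chain.
    interface EventProcessor {
        void process(RawEvent event, EventProcessorChain chain);
    }

    class EventProcessorChain {
        private final List<EventProcessor> processors;
        private int position = 0;

        EventProcessorChain(List<EventProcessor> processors) {
            this.processors = processors;
        }

        void push(RawEvent event) {
            if (position < processors.size()) {
                processors.get(position++).process(event, this);
            }
        }
    }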


The example you used, onChannelChange(), is at a slightly different abstraction level, and a separate effort with regard to the current work. I mentioned earlier that as a first step sensors deal with primitive types (such as boolean vs. door open/close), which are the first primitives used to define devices in terms of read and write commands (and in particular the datatypes of read commands). On top of that it is then possible to build an API for common device categories (such as TV) and higher-level datatype abstractions.

However, I don't want to get into it any deeper yet beyond this "hand-waving" level, because first we need proper object models in the controller (commands, sensors and some of the component types that already exist) for event handling, to put the flow in place and in use, and also to make new sensor types (and therefore datatypes) pluggable via an API so they can be extended in a similar way to the current commands.

Posted by juha at May 11, 2011 12:15