
GraphDB for delivering personalized, in-the-moment learning

What is GraphDB, and how is it powering new-age eLearning applications?

We’re reimagining a world in which technology accelerates continuous lifelong learning, delivering relevant content in moments that matter and in a way that’s most effective in today’s digitally distracted age. In this new world, we imagine the continuous gathering, structuring and analysis of holistic learner information with the help of a “Graph database” (which enables better organization and analysis of learner-centric information than a traditional database) to better profile learners, ultimately delivering unique, personalized “in-the-moment” learning experiences in realtime. Learners can consume context-relevant content on their optic feed using AR technology as they navigate through the real world from moment to moment, and they can find and connect with people immediately available to demystify a concept (peer-to-peer knowledge sharing).

This technology-enabled solution doesn’t intend to replace existing educational institutions or current learning methods; it intends to encourage self-learning by sparking curiosity as the learner interacts with the real world in realtime. Essentially, this solution accelerates continuous lifelong learning by bringing relevant educational content closer to the learner’s real-world experience and integrating technology to enhance the learner’s experience of life in the moment.

First, the solution proposes introducing a GraphDB for organizing learner information, enabling more holistic analysis by linking learner data from multiple sources and drawing new correlations from the data, in order to better profile learners. Second, it proposes an AI-aided system that delivers personalized content, dynamically assesses the impact of that content and progresses the learner’s learning (enabling adaptive learning).

GraphDB-led personalized “in-the-moment” learning

This paper aims to elaborate on the possibility of technology enabling continuous lifelong learning, delivering personalized content in realtime to enhance the learner’s knowledge and experience of life in the moment. This possibility would require the continuous gathering, structuring and analysis of learner information with the help of a “Graph database” (which has simpler and more expressive data models than those produced using traditional relational or other NoSQL databases) to better profile learners. Learners can consume content on their optic feed using AR technology as they navigate through the real world from moment to moment, and they can find and connect with people immediately available to demystify a concept (peer-to-peer knowledge sharing).

KEYWORDS: GraphDB | SmartPersonalizedLearning | AdaptiveLearning | AI | AR | P2P

Extended summary

In a recent study conducted with 20 learners between the ages of 10 and 18, it was observed that the topics taught in school were left unaddressed or forgotten until there was an assessment to test the learner’s understanding. This resulted in:

  • Less opportunity for learners to imagine practical applications of the topics taught, at moments of strong curiosity to understand,
  • Poor recall ability, as the topic is not brought alive in the minds of learners until the next assessment takes place,
  • Less opportunity to visualize the topic and connect the dots with real-life experience.

This paper aims to present the possibility of a learner actively learning as they navigate through their reality and sense, hear and see spaces, objects and interactions in the real world.

This is not just relevant to children studying at school or engineering / medical graduates; it’s relevant to professionals and learners of all ages, throughout life. Learners may also want to learn more about a particular topic, word, phrase or headline instantly, as they receive new information from their environment, and not at a later point when the context has expired. For example, if a learner is exploring the city of Rome with her peer group and comes across an old historic structure, technology should instantly make available relevant content and people in the context of her navigation, so as to intensify her experience of life in the moment. This paper aims to present possible wearable technology that’s affordable and can deliver experiences transcending the barriers of language and culture, creating greater opportunities for society to progress.

This solution does not intend to hinder mainstream education; rather, it provides a supplementary means to access information on the internet and to tap into the collective knowledge and wisdom of learners across the globe.

1. Target users (personas)

The solution proposed in this paper is relevant for self-learners of all ages, nationalities, backgrounds and cultures with basic proficiency in English and a high school science education. However, since that covers a broad segment of users, for the purposes of gaining greater awareness of the problem learners face today and clarity on the transformative power of the solution, we’re considering a narrower set of users: learners aged 11 – 18 pursuing K-12 education, with basic proficiency in English and the ability to understand and communicate in one language.

2. Problem statement: Poor Learning Outcomes – Learner’s Intelligence & Capability

“A primary objective of formal education is to create effective and efficient learning environments. In higher education, the classic learning environment centers on a one-size-fits-all model. Wauters, Desmet and Van den Noortgate (2010) refer to this as a “static” environment, one in which each learner is provided with “the same information in the same structure using the same interface”. The application of technology has facilitated more dynamic learning environments. In fact, the promise of technology has generated a new vision – that of intelligent personalized learning environments that facilitate real-time dynamic mapping and sequencing of instruction to individual learner characteristics.”
– Meg Coffin Murray and Jorge Pérez, in a study comparing adaptive learning to traditional learning.

“A critical part of education is also developing students’ ability to interact and work with others – from the teachers who guide them and spark their interests and passions to their peers with whom they work, learn, and teach. Learning is ultimately a social experience, as it builds people into more mature social actors able to participate in civic society and lead productive lives.” – Pearson Publishing, EdSurge on Decoding Adaptive

In our study conducted with 20 learners between the ages of 10 and 18, it was observed that the topics taught in school were left unaddressed or forgotten until there was an assessment to test the learner’s understanding.

The concepts taught remained in memory and did not turn into “intelligence”, as the learner doesn’t get to instantly visualize and imagine – to form a mental picture of what is taught in the context of its application in the real world.

  • In-the-moment learning doesn’t exist today, as learning is not personalized based on what the learner sees, hears and senses in their subjective reality, and how they respond to and interact with their environment.
  • Opportunities to learn arise outside the classroom, which the learner isn’t aware of, and the learner doesn’t have access to look up information at moments of high need to see an answer or hear an explanation. Learning outside classrooms must be simplified and made as engaging with technology as learning inside classrooms.
  • There is a rising gap between information, knowledge, understanding and practice, leading to poor concept proficiency levels. This further leads to poor levels of intelligence and capability, resulting in the learner’s inability to navigate through life.
  • Academic learning is focused on imparting knowledge that can build IQ; however, learners’ needs today are broader, requiring holistic development of EQ (Emotional Intelligence / Quotient) and SQ (Social Intelligence / Quotient). Integrated Social Emotional Learning (SEL) doesn’t exist in the current system of learning, and SEL can only be holistically enabled when learning happens in response to how the learner is navigating life from one moment to another.

3. Proposed solution: Intelligent Mentoring System

“We define digital adaptive learning tools as education technologies that can respond to a student’s interactions in real-time by automatically providing the student with individual support.” – Pearson Publishing, EdSurge on Decoding Adaptive

“Educators have long known that learning is improved when instruction is personalized — adapted to individual learning styles. In fact, some argue that advocacy for adaptive instruction dates back to antiquity (Lee & Park, 2008). Modern views of adaptive learning theory, however, are rooted in the work of contemporary educational psychologists. Cronbach (1957) theorized that learning outcomes are based on the interaction between “attributes of person” and treatment variables. He advocated for differentiating instruction (treatment) to a person’s cognitive aptitude. The findings of his early research were inconsistent, leading him to surmise that unidentified interactions existed. His original hypothesis forms the foundation for adaptive learning; he subsequently extended his model to include cognition and personality (Cronbach, 1975). Educators should, he states, “find for each individual the treatment to which he can most easily adapt” (Cronbach, 1957, p.679).” – Meg Coffin Murray and Jorge Pérez on Adaptive Learning Theory

In response to the problem statement defined in the previous section, this paper intends to present a viable, feasible technology enabled solution to introduce personalized “in the moment” learning.

The solution, called the “Intelligent Mentoring System”, will enable in-the-moment, personalized learning via a learner’s kit which will include:

  • Intelligent Mentoring System – AR & Optic feed: A SmartGlass for projecting context-relevant information in the learner’s optic feed as the learner navigates from one moment to another, senses information from the environment and responds to it. The Glass would also identify objects in the user’s environment and proactively overlay information as the user moves. This enables learning as the learner moves outside classrooms, rather than remaining stationary to access a computer.
  • Intelligent Mentoring System – Mobile app: An app that senses words used in the learner’s environment, picks up 20 signals about the learner’s location, movement, interactions & exchanges, and makes available content that further demystifies ideas the learner was exposed to during the day. The app also shows learners available nearby who were exposed to similar information in their environment and had similar learning patterns (neural pathways), and who may have a new insight related to the topic.
  • Intelligent Mentoring System – Content | Assessment | Sequence logic: The app will maintain a history of how the learner consumed content from this solution – a “learning” history. The app will also have an assessment module to evaluate the learner’s understanding of a concept at the end of each session. For example: when the learner looks up a particular concept, text, images and videos related to that concept are presented so the learner can gain clarity and concept proficiency. At the end of the session, the learner is asked questions to gauge the level of understanding and the complexity the learner could handle, to register the impact of learning and proactively intervene to assist the learner’s efforts to gain greater proficiency in that concept. This enables Adaptive Learning.
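The assessment and sequence logic above can be sketched in a few lines. This is a minimal, hypothetical illustration (the class and method names are not part of any real product): the system records per-concept scores and raises or lowers content complexity based on recent performance.

```python
# A minimal sketch of the assessment / sequence logic described above.
# Names (LearnerHistory, next_complexity) are illustrative assumptions.

class LearnerHistory:
    """Tracks per-concept assessment scores to adapt content complexity."""

    def __init__(self):
        self.scores = {}  # concept -> list of scores in [0, 1]

    def record(self, concept, score):
        self.scores.setdefault(concept, []).append(score)

    def next_complexity(self, concept, current_level=1, max_level=5):
        """Raise complexity after strong recent performance, lower it after weak."""
        history = self.scores.get(concept, [])
        if not history:
            return current_level
        recent = sum(history[-3:]) / len(history[-3:])  # mean of last 3 attempts
        if recent >= 0.8:
            return min(current_level + 1, max_level)
        if recent < 0.5:
            return max(current_level - 1, 1)
        return current_level
```

In a full system, this decision would also weigh the contextual signals described earlier, not assessment scores alone.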

As quoted by Pearson Publishing, further supporting the objective of this paper:

“Smarter digital tools that adapt to a learner as they progress through content hold great promise for ensuring that every learner reaches their full potential.”
“Learning styles encompass preference for information type (concrete versus abstract), presentation style (visual, auditory, or kinesthetic) and learning action (active versus reflective). The vast academic literature on learning styles is peppered with few robust experimental studies (Akbulut & Cardak, 2012; Pashler et al., 2008), and the scarce research outcomes are mixed on the effectiveness of adapting instruction to learning style. Studies do consistently demonstrate that students are able to identify their own learning preferences (Pashler et al., 2008) and that adapting learning conditions to these preferences increases student satisfaction (Akbulut & Cardak, 2012).” – Meg Coffin Murray and Jorge Pérez on Adaptive Learning Theory

4. Solution design: “Intelligent Mentoring System”

“An adaptive learning system can be seen as an expression of an informing system wherein the informer is the instructor, the client is the student, and the rule-based adaptive engine both informs and is informed by interaction with the client.” – Meg Coffin Murray and Jorge Pérez on Adaptive Learning Theory
“Adapting instruction to an individual’s learning style results in better learning outcomes (Pashler, McDaniel, Rohrer, & Bjork, 2008)” – Meg Coffin Murray and Jorge Pérez on Adaptive Learning Theory

The objective of this paper is to present the possibility of “personalized, in-the-moment learning” in an age of digital distractions, serving content in a way that’s consumed effectively. This embodies the foundational principles of Adaptive Learning and brings its power to meet the needs of the learner’s reality as the learner navigates from one moment to another.

The solution “​Intelligent Mentoring System​” proposed above would have 4 core modules, centered around the learner, covering the questions listed below under each:

I. User Interface for the learner​:

1. How the learner will receive context relevant information (content) and instructions from the “Intelligent Mentoring System”

2. How the learner will interact with the “Intelligent Mentoring System”, seek more information or instructions before taking action

3. How the learner will self-assess understanding of a concept and inform the “Intelligent Mentoring System” of learning effectiveness & progression

Assumption: The learner’s preferred learning style is similar to how the learner consumes digital information, records digital expressions and carries out social interactions online.

“Learning style is the individual preferred behavior in which a learner observes and interacts with the learning environment to obtain knowledge and skills. Learning styles help learners understand their own strengths for more efficient learning (Papanikolaou, Andrew, Bull, & Grigoriadou, 2006). Soloman and Felder (2003) proposed the Index of Learning Style (ILS) questionnaire for evaluating learning styles. The Felder-Silverman theory classifies learning styles into four dimensions: (1) perception: sensitive/intuitive dimension, (2) input: visual/verbal dimension, (3) processing: active/reflective dimension, and (4) understanding: sequential/global dimension (Felder, 1993; Felder & Silverman, 1988).” – Ho-Chuan Huang et al. / Procedia – Social and Behavioral Sciences 64 (2012) 332 – 341

II. User Context Detection​:

1. How would the “Intelligent Mentoring System” continuously receive signals about the learner & the learner’s contextual parameters

2. How would the “Intelligent Mentoring System” recognize the play of various parameters in the learner’s environment & the learner’s degree of need to understand a concept (in sufficient depth) in context of the learner’s reality

3. How would the “Intelligent Mentoring System” intervene in the learner’s environment to present content that can possibly influence or alter the learner’s response to the moment.

“The adaptation model is the expression of an instructional strategy defining when and how adaptation occurs. Through an analysis of learner characteristics, associated learning resources are assembled and delivered to the learner.” – Meg Coffin Murray and Jorge Pérez, in a study comparing adaptive learning to traditional learning

III. Continuous User Profiling & Content Tailoring​:

1. How would the “Intelligent Mentoring System” continuously receive learner information, organize it, analyse it and predict what content, at what level of complexity, served in what way, would best meet the learner’s changing needs next, while he/she navigates through their reality.

2. How would the “Intelligent Mentoring System” manage the learner’s performance as he/she progresses, by picking up contextual signals and making sense of their correlation to the learner’s performance in assessments.

3. How would the “Intelligent Mentoring System” diagnose learner’s consistency and analyse the effectiveness of overall learning for altering the complexity levels of the content served accordingly and ensuring each intervention of the “Intelligent Mentoring System” is effective.

“A more sophisticated adaptive learning system adjusts the presentation of instructional materials based on assessment of the user’s understanding of concepts — abstractions or general ideas about what something is or how it works.” – Meg Coffin Murray and Jorge Pérez on a Study Comparing Adaptive Learning to Traditional Learning

“Learner history represents a tool’s ability to use data from a student’s prior performance. If the tool does remember how the student has previously interacted with the content, then this information is added to the data pool and considered during the process of changing a student’s path. Over time, the tool creates a profile of the learner’s interactions with the content, which continues to grow as the student uses it.” – Pearson Publishing, EdSurge on Decoding Adaptive

IV. Learner data and diagnosis​:

1. How would the “Intelligent Mentoring System”, using GraphDB, store learner information and the multi-dimensional relationships between the learner and the real world, to be able to consistently analyse large amounts of data and predict the relevant content of the right complexity level, or the peer group (fellow learners), to be presented to the user.

2. How would the “Intelligent Mentoring System” store, structure & retrieve learning content to be served in a way that’s most effective to meet the needs of the learner’s real-world moment.

3. How would the “Intelligent Mentoring System” store, structure & analyse assessment data to perform an accurate diagnosis of the learner’s strengths and opportunities, subsequently connecting it to the content delivery engine to ensure learning is meaningfully progressed.
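To make the learner-data questions above concrete, here is a hypothetical, in-memory property-graph sketch (no real graph database is used; node and relationship names are illustrative): learners, concepts and places become nodes, and typed, property-carrying relationships connect them.

```python
# Illustrative property-graph model of learner data: nodes with properties,
# plus typed relationships that themselves carry properties (e.g. proficiency).

class PropertyGraph:
    def __init__(self):
        self.nodes = {}   # node_id -> properties dict
        self.edges = []   # (source, relationship_type, target, properties)

    def add_node(self, node_id, **props):
        self.nodes[node_id] = props

    def relate(self, source, rel_type, target, **props):
        self.edges.append((source, rel_type, target, props))

    def neighbours(self, node_id, rel_type=None):
        """Follow outgoing relationships, optionally filtered by type."""
        return [t for s, r, t, _ in self.edges
                if s == node_id and (rel_type is None or r == rel_type)]

g = PropertyGraph()
g.add_node("asha", kind="Learner", age=14)
g.add_node("integrals", kind="Concept")
g.add_node("colosseum", kind="Place", city="Rome")
g.relate("asha", "STUDIED", "integrals", proficiency=0.6)
g.relate("asha", "VISITED", "colosseum")
```

In a production system, a native graph database such as Neo4j would play this role; the point of the sketch is that relationships (STUDIED, VISITED) are first-class, queryable data rather than join tables.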

“Besides having to use a blended learning model, in which class-time is divvied up between traditional and electronic learning, teachers must be willing to let students progress at their own pace. They need to be comfortable letting software make real decisions about what students should learn next, and use quantitative data on student performance gathered by the software along with their own qualitative gut instincts. They need to be willing to trade the stand-in-the-front-of-the-room-and-lecture model, and instead provide more intimate, personalized instruction to whichever students aren’t on computers at that given moment. Adaptive technology requires a different sort of trust between teacher and student. ‘You have to let go of some of the micromanagement,’ says Mark Montero, a teacher.” – Pearson Publishing, EdSurge on Decoding Adaptive

5. GraphDB architecture

Relational Databases Lack Relationships
“For several decades, developers have tried to accommodate connected, semi-structured datasets inside relational databases. But whereas relational databases were initially designed to codify paper forms and tabular structures—something they do exceedingly well—they struggle when attempting to model the ad hoc, exceptional relationships that crop up in the real world. Ironically, relational databases deal poorly with relationships. Relationships do exist in the vernacular of relational databases, but only at modeling time, as a means of joining tables.

In our discussion of connected data, we mentioned we often need to disambiguate the semantics of the relationships that connect entities, as well as qualify their weight or strength. Relational relations do nothing of the sort. Worse still, as outlier data multiplies, and the overall structure of the dataset becomes more complex and less uniform, the relational model becomes burdened with large join tables, sparsely populated rows, and lots of null-checking logic. The rise in connectedness translates in the relational world into increased joins, which impede performance and make it difficult for us to evolve an existing database in response to changing business needs.” – Ian Robinson, Jim Webber & Emil Eifrem, Graph databases, second edition

GraphDB to best model our reality

Choosing GraphDB for information storage, analysis and retrieval started with one fundamental question – which database can best model real life: an individual’s reality, social interactions and relationships with the real world? Relationships are complex in nature, non-linear and multi-dimensional. A 2D table may not always be enough to best represent an individual’s reality. A GraphDB can support multiple layers of information and help navigate through the learner’s real-life information effectively, to analyze behaviors and predict opportunities that may be relevant to the individual.

GraphDB, beyond being a database, is more of an approach to structuring information about people and relationships that best models our real-world interactions and the ever-changing complexity of our exchanges with people, places, things and all of life around us — it is best suited for applications that deal with continuously evolving data sets of users, requiring efficient processing of multi-layered, many-to-many relationships, especially for purposes of personalization.

Social networks like LinkedIn and Facebook are already graphs, as networking is all about connections. There is no point in storing data as tables when a graph model can best embody the human web — of connections, exchanges, interactions and influence. This model not only helps avoid massive computational labour in storing and searching through a large web of information, but also helps in finding new patterns and correlations in data and deriving ground-breaking insights into evolving human behavior.
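A tiny illustration of why a graph model suits the human web: in a relational store, "friends of friends" requires a self-join of a friendship table, whereas with adjacency the query is just two hops. The names and friendship data below are invented for illustration.

```python
# Illustrative only: friends-of-friends via simple adjacency sets — the kind
# of traversal that would need a self-join in a relational friendship table.

friends = {
    "asha": {"ben", "chen"},
    "ben":  {"asha", "dev"},
    "chen": {"asha", "dev", "esha"},
    "dev":  {"ben", "chen"},
    "esha": {"chen"},
}

def friends_of_friends(person):
    """Two hops out, excluding direct friends and the person themselves."""
    direct = friends.get(person, set())
    two_hops = set().union(*(friends[f] for f in direct)) if direct else set()
    return two_hops - direct - {person}
```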

To implement the proposed solution “Intelligent Mentoring System”, we will need two graph databases – one modelling the learner (the learner’s profile and social graph), and one organizing learning content as a knowledge graph, both elaborated below.


Delivering Personalized Content in realtime

Recommender systems are built to serve highly relevant, context-aware recommendations in realtime to an active consumer. Recommendation systems are widely used in many fields, such as e-commerce (Amazon), film and video, music, and social networking (Facebook). It’s worth taking a few minutes to analyze how people in our day-to-day lives tend to observe us continuously, learn about our preferences and suggest places, people, things, activities and experiences – where we could go next for our vacations, which outlet we could visit to buy the next product trending in fashion, which movies we could avoid, what new food flavours we could explore, and so on.
They end up offering relevant suggestions, personalized to suit our needs and personality, by gathering information about us both in realtime and archived in memory as history; processing that information when they see us, they offer recommendations based on their judgement and evaluation. For purposes of illustration, let us form a mental picture of the learner in the scenario below.

In the middle of a lecture in the classroom, a student raises a doubt with the teacher. The teacher either says “please note it down / ask doubts at the end”, or s/he tries to probe the student to understand the clarification the student needs. S/he empathizes with the learner’s position, visualizes the learner’s difficulty in understanding, and analyzes what the student knows from interactions during previous lectures and how the learner perceives concepts (vantage point or learning style); making that information the backdrop / historical context to profile the learner in realtime, the teacher tries to provide an answer. In most cases, the teacher succeeds in this approach. But in the digital world, despite all the advancements in artificial intelligence and machine learning, AI is not yet able to demonstrate empathy and fully replicate the neural pathways of a conscious human mind. AI is continuously maturing; with human evolution, and with technology infrastructure that can support pseudo-human mentoring behaviors, the learner’s experience can be enhanced to a great extent.

With a GraphDB for the knowledge graph, learning resources are organized and linked to form a large web of content, creating an intense field of intelligence available for learners to access. This opens the possibility of AI mimicking the role of a teacher / mentor: continuously listening to the learner, processing the learner’s profile in realtime based on contextual parameters and the learner’s needs in the moment, traversing the knowledge graph to find the most relevant content, traversing the learner’s social graph to find people and insights that may be relevant to the moment, and delivering the results to meet the learner’s need in the moment.
For example: if a learner is trying to solve a particular question in “definite integrals” but s/he gets stuck at a step that requires knowledge about “differentiation of logarithmic functions” which s/he had either forgotten or skipped (did not learn in the past), the system would recommend content on “differentiation of logarithmic functions” to the learner.
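The prerequisite lookup in this example can be sketched as a walk over PREREQUISITE edges in the knowledge graph. The concept names and edges below are illustrative, not a real curriculum graph:

```python
# A sketch of the prerequisite lookup described above: walk the PREREQUISITE
# edges of a (hypothetical) knowledge graph and collect unmastered concepts.

prerequisites = {
    "definite integrals": ["indefinite integrals",
                           "differentiation of logarithmic functions"],
    "indefinite integrals": ["differentiation basics"],
    "differentiation of logarithmic functions": ["differentiation basics",
                                                 "logarithms"],
}

def missing_prerequisites(concept, mastered, seen=None):
    """Recursively collect prerequisite concepts the learner hasn't mastered."""
    seen = seen if seen is not None else set()
    gaps = []
    for prereq in prerequisites.get(concept, []):
        if prereq in seen:
            continue
        seen.add(prereq)
        if prereq not in mastered:
            gaps.append(prereq)
        gaps.extend(missing_prerequisites(prereq, mastered, seen))
    return gaps
```

A learner stuck on "definite integrals" who never learned logarithmic differentiation would thus be recommended exactly those gap concepts, which is the adaptive behaviour the example describes.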

“Recommendation algorithms establish relationships between people and things: other people, products, services, media content—whatever is relevant to the domain in which the recommendation is employed. Relationships are established based on users’ behaviors as they purchase, produce, consume, rate, or review the resources in question. The recommendation engine can then identify resources of interest to a particular individual or group, or individuals and groups likely to have some interest in a particular resource. With the first approach, identifying resources of interest to a specific user, the behavior of the user in question—her purchasing behavior, expressed preferences, and attitudes as expressed in ratings and reviews are correlated with those of other users in order to identify similar users and thereafter the things with which they are connected. The second approach, identifying users and groups for a particular resource, focuses on the characteristics of the resource in question. The engine then identifies similar resources, and the users associated with those resources. As in the social use case, making an effective recommendation depends on understanding the connections between things, as well as the quality and strength of those connections—all of which are best expressed as a property graph.” – Ian Robinson, Jim Webber & Emil Eifrem, Graph databases, second edition

Types of Recommendations and Algorithms

There are plenty of algorithms for building recommendation systems. Broadly, they are categorized into three basic areas: collaborative filtering, content-based filtering, and knowledge-based filtering.

1. Collaborative Filtering Recommender Systems

This technique tries to predict ratings for unrated items and select the items with the best ratings. CF recognizes two basic approaches: “finding the nearest neighbors” and “model-based techniques”. Collaborative filtering can be done as user–user or item–item filtering. In our solution, user–user CF tries to find similar users by comparing how they rate content, and based on that, predicts what the current user will like to learn.
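A minimal user–user CF sketch: compute cosine similarity between learners’ rating vectors, find the nearest neighbour, and suggest items that neighbour rated highly. The ratings below are invented engagement scores, not data from any real system.

```python
# Minimal user-user collaborative filtering via cosine similarity.
from math import sqrt

ratings = {
    "asha": {"algebra": 5, "geometry": 3, "optics": 4},
    "ben":  {"algebra": 4, "geometry": 2, "optics": 5, "genetics": 5},
    "chen": {"algebra": 1, "geometry": 5, "genetics": 2},
}

def cosine(u, v):
    """Cosine similarity between two sparse rating dicts."""
    common = set(u) & set(v)
    if not common:
        return 0.0
    dot = sum(u[i] * v[i] for i in common)
    norm = sqrt(sum(x * x for x in u.values())) * sqrt(sum(x * x for x in v.values()))
    return dot / norm

def recommend(user):
    """Suggest unseen items rated highly (>= 4) by the most similar other user."""
    _, nearest = max((cosine(ratings[user], ratings[o]), o)
                     for o in ratings if o != user)
    return [item for item, score in ratings[nearest].items()
            if item not in ratings[user] and score >= 4]
```

Real systems would aggregate over many neighbours and handle cold-start learners; this shows only the core "nearest neighbour" idea.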

2. Content-based Recommender Systems

This technique is based on comparing items’ attributes. It recognizes what content the user is engaging with and what type of content the user prefers to learn. This algorithm can be further customized not just to strengthen what the learner is already good at, but also to keep a balance across the subjects the user learns.

For example, if a student is highly proficient at mathematics and loves spending time solving mathematical problems, then the app can, on the one hand, make sure that she gets what interests her in ample measure, while on the other hand it can make sure that she spends appropriate time on other subjects she might be avoiding, for overall growth. Content-based algorithms mainly use manually created item annotations and attributes, but it is also possible to use some automatic techniques (for instance, color detection).
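Comparing item attributes can be as simple as measuring tag overlap. Below, a content-based sketch using Jaccard similarity over hypothetical annotations of learning content:

```python
# Content-based filtering sketch: rank items by tag overlap (Jaccard similarity).

items = {
    "intro-to-integrals":   {"maths", "calculus", "video"},
    "logarithms-explained": {"maths", "algebra", "video"},
    "cell-biology-101":     {"biology", "video"},
}

def jaccard(a, b):
    """Ratio of shared tags to all tags across both items."""
    return len(a & b) / len(a | b)

def similar_items(item_id, threshold=0.3):
    """Items whose annotations overlap the given item's beyond a threshold."""
    base = items[item_id]
    return sorted(other for other, tags in items.items()
                  if other != item_id and jaccard(base, tags) >= threshold)
```

The balancing behaviour described above (nudging an avoided subject) would sit on top of this: the app can deliberately mix in items that score low on similarity to the learner’s favourites.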

3. Knowledge-based Recommender Systems

These systems use user-defined preferences and match them to the corresponding items. Such systems face the same problems as collaborative filtering (an insufficient amount of data) or content-based filtering (a similar item does not necessarily mean a correct prediction). Knowledge-based systems are suitable for one-time, expensive purchases – a computer or car, for example – where user preferences from previous purchases may not exist, may be insufficient, or may be very old and outdated. The two basic types of knowledge-based recommender systems are case-based and constraint-based systems.
Case-based recommenders try to retrieve similar items using different types of similarity measures. Constraint-based recommenders use explicitly defined recommendation rules.
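A constraint-based recommender reduces to filtering candidates against explicitly stated rules. Continuing the computer-purchase example from above (the catalogue and constraint fields are invented for illustration):

```python
# Constraint-based recommender sketch: user-defined rules filter candidates.

laptops = [
    {"name": "A", "price": 700,  "ram_gb": 8},
    {"name": "B", "price": 1200, "ram_gb": 16},
    {"name": "C", "price": 950,  "ram_gb": 16},
]

def constraint_recommend(catalogue, max_price=float("inf"), min_ram_gb=0):
    """Keep items satisfying every explicitly defined constraint."""
    return [item["name"] for item in catalogue
            if item["price"] <= max_price and item["ram_gb"] >= min_ram_gb]
```

Unlike CF, nothing here depends on purchase history, which is exactly why the approach suits one-time expensive purchases.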

More definitions of the GraphDB

“A graph is a structure composed of a set of vertices (i.e.nodes, dots) connected to one another by a set of edges (i.e.links, lines). The concept of a graph has been around since the late 19th century, however, only in recent decades has there been a strong resurgence in both theoretical and applied graph research in mathematics, physics, and computer science. In applied computing, since the late 1960s, the interlinked table structure of the relational database has been the predominant information storage and retrieval model. With the growth of graph/network-based data and the need to efficiently process such data, new data management systems have been developed. In contrast to the index-intensive, set-theoretic operations of relational databases, graph databases make use of index-free, local traversals.” – Marko A. Rodriguez, Peter Neubauer on The Graph Traversal Pattern

“Neo4j version 1.0-b112 is the implementation chosen to represent graph databases. It is open source for all noncommercial uses. It has been in production for over five years. It is quickly becoming one of the foremost graph database systems. According to the Neo4j website, Neo4j is “an embedded, disk-based, fully transactional Java persistence engine that stores data structured in graphs rather than in tables”[7]. The developers claim it is exceptionally scalable (several billion nodes on a single machine), has an API that is easy to use, and supports efficient traversals. Neo4j is built using Apache’s Lucene 3 for indexing and search. Lucene is a text search engine, written in Java, geared toward high performance.” – Chad Vicknair, Michael Macias, Zhendong Zhao, Xiaofei Nan, Yixin Chen and Dawn Wilkins, in a paper comparing a graph database and a relational database.

Querying the GraphDB

“Complex queries are the types of questions that you want to ask of your data that are inherently composed of a number of complex join-style operations. These operations, as every database administrator knows, are very expensive operations in relational database systems, because we need to be computing the Cartesian product of the indices of the tables that we are trying to join. That may be okay for one or two joins between two or three tables in a relational database management system, but as you can easily understand, this problem becomes exponentially bigger with every table join that you add. Even on smaller datasets, it can become an unsolvable problem in a relational system, and this is why complex queries become problematic.

An example of such a complex query would be finding all the restaurants in a certain London neighborhood that serve Indian food, are open on Sundays, and cater for kids. In relational terms, this would mean joining up data from the restaurant table, the food type table, the opening hours table, the caters for table, and the zip-code table holding the London neighborhoods, and then providing an answer. No doubt there are numerous other examples where you would need to do these types of complex queries; this is just a hypothetical one.

In a graph database, a join operation will never need to be performed: all we need to do is to find a starting node in the database (for example, London), usually with an index lookup, and then just use the index-free adjacency characteristic and hop from one node (London) to the next (Restaurant) over its connecting relationships ((Restaurant)-[LOCATED_IN]->(London)). Every hop along this path is, in effect, the equivalent of a join operation. Relationships between nodes can therefore also be thought of as an explicitly stored representation of such a join operation.

This is actually one of the key performance characteristics of a graph database: as soon as you grab a starting node, the database will only explore the vicinity of that starting node and will be completely oblivious to anything that is not connected to the starting node. The key performance characteristic that follows from this is that query performance is very independent of the dataset size, because in most graphs, everything is not connected to everything. By the same token, as we will see later, performance will be much more dependent on the size of the result set, and this will also be one of the key things to keep in mind when putting together your persistence architecture.”

Pathfinding queries:

“Another type of query that is extremely well-suited for graph databases is a query where you will be looking to find out how different data elements are related to each other. In other words, finding the paths between different nodes on your graph. The problem with such queries in other database management systems is that you will actually have to understand the structure of the potential paths extremely well. You will have to be able to tell the database how to jump from table to table, so to speak. In a graph database, you can still do that, but typically you won’t. You just tell the database to apply a graph algorithm to a starting point and an endpoint and be done with it. It’s up to the database to figure out if and how these data elements are connected to each other and return the result as a path expression for you to use in your system. The fact that you are able to delegate this to the database is extremely useful, and often leads to unexpected and valuable insights.”
– Jerome Baton, Rik Van Bruggen, Learning Neo4j 3.x, Packt Publishing (2017)
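The index-free adjacency idea from the quotes above can be sketched in plain Python: each node keeps direct references to its neighbours, so the restaurant lookup is a series of hops rather than table joins. The node names and relationship types below are illustrative, not from any real dataset.

```python
# Index-free adjacency, sketched with a plain dict: each node holds
# direct references to its neighbours, so a "join" is just a hop.
# All names (London, r1..r3, LOCATED_IN, SERVES, ...) are illustrative.
graph = {
    "London": {"LOCATED_IN": ["r1", "r2", "r3"]},  # pre-stored incoming hops
    "r1": {"SERVES": ["Indian"], "OPEN_ON": ["Sunday"], "CATERS_FOR": ["Kids"]},
    "r2": {"SERVES": ["Italian"], "OPEN_ON": ["Sunday"], "CATERS_FOR": ["Kids"]},
    "r3": {"SERVES": ["Indian"], "OPEN_ON": ["Saturday"], "CATERS_FOR": []},
}

def indian_sunday_kids(city):
    """Start at the city node and hop; no table scan, no Cartesian product."""
    return [
        r for r in graph[city]["LOCATED_IN"]
        if "Indian" in graph[r]["SERVES"]
        and "Sunday" in graph[r]["OPEN_ON"]
        and "Kids" in graph[r]["CATERS_FOR"]
    ]

print(indian_sunday_kids("London"))  # → ['r1']
```

Note how only the neighbourhood of the starting node is ever inspected, which is exactly why the query cost tracks the result set rather than the dataset size.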
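Likewise, the pathfinding behaviour described above (hand the database a start and end node and let it discover whether and how they connect) amounts to a graph search. A minimal breadth-first-search sketch, with hypothetical people as nodes:

```python
from collections import deque

# A toy undirected graph; node names are purely illustrative.
edges = {
    "Alice": ["Bob", "Carol"],
    "Bob": ["Alice", "Dave"],
    "Carol": ["Alice", "Dave"],
    "Dave": ["Bob", "Carol", "Eve"],
    "Eve": ["Dave"],
}

def shortest_path(start, end):
    """Breadth-first search: the caller names two nodes; the algorithm
    figures out if and how they are connected and returns the path."""
    queue, seen = deque([[start]]), {start}
    while queue:
        path = queue.popleft()
        if path[-1] == end:
            return path
        for nxt in edges.get(path[-1], []):
            if nxt not in seen:
                seen.add(nxt)
                queue.append(path + [nxt])
    return None  # no connection exists

print(shortest_path("Alice", "Eve"))  # → ['Alice', 'Bob', 'Dave', 'Eve']
```

In Neo4j the same delegation happens declaratively (e.g. via a shortest-path query), but the principle is identical: the caller supplies endpoints, the engine supplies the path.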

The MySQL foreign-key approach poses limitations:

Most NoSQL databases store data as disconnected aggregates. This approach is not ideal for data that is inherently related. To establish a relationship, we embed a reference to one aggregate inside a field of another, i.e., the identifier of one aggregate is linked to that of the other, much like the foreign-key approach of MySQL. But this leaves joining aggregates as the only way to run queries and extract insights from the data. The strategy addresses some concerns of traditional SQL solutions, but still fails to unfold the complex patterns and relationships that exist within the data.
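A toy Python sketch of the aggregate-reference problem described above: the embedded identifiers (all names and values hypothetical) force the application to perform the join itself.

```python
# Disconnected aggregates with embedded identifiers: relating them forces
# an application-side join, much like resolving a foreign key by hand.
students = [
    {"id": 1, "name": "Asha", "enrolled_in": [101, 102]},
    {"id": 2, "name": "Ravi", "enrolled_in": [102]},
]
courses = {101: "Calculus", 102: "Trigonometry"}

def courses_of(student_name):
    # Resolve each embedded reference by lookup -- the "join" lives in code,
    # and every new kind of relationship needs another loop like this one.
    student = next(s for s in students if s["name"] == student_name)
    return [courses[c] for c in student["enrolled_in"]]

print(courses_of("Ravi"))  # → ['Trigonometry']
```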

Document model versus Graph:

One fixed perspective versus many perspectives emerging dynamically: the document model lets you store data in a schema-free manner, and a document is easily represented as a tree, which is itself a graph. But a tree captures only one perspective of your data. In the graph approach, unlike a document-store hierarchy, more than one natural representation of the data can emerge dynamically when needed.

Organizing Data in Neo4j GraphDB

In the graph data model we store data as NODES connected through RELATIONSHIPS. Both act as containers for PROPERTIES that describe them. The last construct of a Neo4j graph database is the label, used to create subgraphs within the graph and categorize nodes for faster traversals. We define each of these below for storing academic data:
Nodes: Store entity information; in our case, individual topics like calculus, trigonometry, macroeconomics, etc.

Properties: Like a record in the relational database world, properties are stored as key-value pairs. Both nodes and relationships can have properties. Adding properties to relationships further strengthens the quality of a relationship and can be used during queries/traversals to evaluate the required pattern. In our case, node properties include the real-life application field, the language of the course material, the number of dedicated hours required to learn the topic, grade weightage, etc.; the relationship property is dependency, which can take the value “prerequisite” or “enhances”.

Relationships: These are the equivalent of stored, precalculated JOIN operations in traditional database solutions. A relationship connects two nodes explicitly and helps structure entities. In our case, the relationships are ParentTopic (the connected node is a prerequisite to understanding the content of this node), SiblingTopic (understanding the topic in the connected node enhances the understanding of the present node), and FriendTopic (the connected node is a similar topic and can be explored if the learner is interested).
Labels: Used for indexing and some schema constraints; equivalent to providing meta-information about the nodes. In our case, nodes are labeled by subject (Mathematics, Science, Commerce, Business, Music, etc.).
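The node/relationship/property/label model described above can be mocked up in a few lines of Python (topic names, property values and relationship types are illustrative, following the academic examples in this section):

```python
# A minimal in-memory model of the academic graph described above.
# Topic names, labels, hours, and relationship types are illustrative.
nodes = {
    "calculus":     {"label": "Mathematics", "language": "English", "hours": 40},
    "trigonometry": {"label": "Mathematics", "language": "English", "hours": 25},
    "algebra":      {"label": "Mathematics", "language": "English", "hours": 30},
}
# Each relationship: (from, type, to, relationship properties)
relationships = [
    ("algebra",      "ParentTopic",  "calculus", {"dependency": "prerequisite"}),
    ("trigonometry", "SiblingTopic", "calculus", {"dependency": "enhances"}),
]

def prerequisites(topic):
    """Traverse incoming ParentTopic relationships whose dependency
    property is 'prerequisite' -- the pattern a Cypher MATCH would express."""
    return [
        src for (src, rel, dst, props) in relationships
        if dst == topic and rel == "ParentTopic"
        and props["dependency"] == "prerequisite"
    ]

print(prerequisites("calculus"))  # → ['algebra']
```

In Neo4j itself the same structure would be created and queried in Cypher; the sketch only illustrates how the four constructs fit together.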

Performance Analysis of Global queries

On a system with a Core i3 processor, 2GB of RAM and a 10GB SATA disk, a performance analysis of a global, constraint-based user lookup was constructed to measure the performance of queries typically issued against databases. The intent of the global query was to characterize the performance of queries requiring inspection of all users in the system. The queries used for MySQL and Neo4j are as follows:

MySQL: SELECT COUNT(*) FROM student_node WHERE student_node.age > ? AND student_node.age < ?;

Neo4j: MATCH (t:teacher)-[r:teaches {sub:"Security"}]->(a:area {name:"Security"}) RETURN t;

6. Infrastructure to host this solution

The “Intelligent Mentoring System” is proposed to be developed on NodeJS (a JavaScript runtime) to work seamlessly with the GraphDB and deliver personalized content. The personalization engine will be developed on NodeJS on the backend, and its APIs could be consumed by the user interfaces developed for mobile and the smartglass (wearable). The “Intelligent Mentoring System” will require a highly scalable, secure and stable cloud server infrastructure (provided by leaders in this industry, like Amazon, Google and Microsoft) that allows each feature of the system (e.g., content lookup, peer available nearby) to function as an independent service, with no one service affecting the availability and functioning of another. The GraphDB would be hosted and maintained by the GraphDB provider, pioneers like Neo4j. The application software will be architected using a serverless approach with a microservices architecture, making the system modular and highly available at moments that matter.

Serverless architecture allows coding a Function as a Service (FaaS), often used to implement a microservice, which is making life a lot easier for businesses today: all you need to do is code each feature as a function (which the platform knows how best to run), without having to worry about server availability, active server monitoring, management or maintenance.
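A minimal sketch of what one such function could look like, assuming a Lambda-style `handler(event, context)` signature; the feature (“content lookup”), the data and the response shape are all hypothetical:

```python
# A hypothetical FaaS handler: one feature ("content lookup") coded as a
# single function the platform runs and scales on demand. The event shape
# and handler signature follow the common AWS Lambda convention.
CONTENT = {"calculus": "Intro to derivatives", "algebra": "Linear equations"}

def handler(event, context=None):
    """Look up learning content for the topic named in the event."""
    topic = event.get("topic")
    if topic in CONTENT:
        return {"statusCode": 200, "body": CONTENT[topic]}
    return {"statusCode": 404, "body": "topic not found"}

print(handler({"topic": "calculus"}))
# → {'statusCode': 200, 'body': 'Intro to derivatives'}
```

Each feature of the system would be one such independent function, which is what makes per-feature scaling and deployment possible.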

Benefits of Serverless architecture & microservices approach

1. AUTO-SCALING: What if someone else took care of automatically scaling your server configuration and availability up or down, based on how many people are using your application in realtime? AWS Lambda, Google Firebase and IBM OpenWhisk intelligently create more and more copies of your features / functions (microservices), making them available as services to more and more people as demand rises. You can also customize this auto-scaling behavior by writing your own custom cloud functions.

2. AUTO-HOSTING: All aspects of hosting your application are taken care of by your cloud vendor (like Firebase, Lambda or Azure). Developers need not write any server-hosting functionality, as vendors already provide commonly used server capabilities; developers simply invoke these standard functions by making API calls. Regarding security, it is also much simpler for developers to implement security practices and protocols, as directed by the serverless environment.

3. EVENT-DRIVEN: Serverless architecture fundamentally pushes your business to be demand-driven or event-driven, turning on / off in response to user events. This questions businesses that are blindly available 24/7 online, even when there is no demand. Sometimes, less is more. Reiterating the cost savings: with serverless, your business can save as much as 60% of its cloud infrastructure costs, as you don't pay for server downtime or idle time. In serverless, you are billed only for the memory reserved for your functions while they are actively running, plus some amount for the resources they require to run.

Greater system agility (with Function as a Service)

1. Less Deployment Time (Time to Market): the time required to package and deploy new functionality or a FaaS (Function as a Service) application is significantly less than with the regular self-managed server approach, as serverless removes many tasks from the deployment process (such as managing and running core services like databases and load balancers) by taking care of them proactively.

2. Faster Response to Market Demands: the system can respond swiftly to ever-changing market conditions and demand, as developers can tweak existing code or write new functionality (if required) and deploy it with a few API calls, without worrying about code integration and delivery; i.e., in a serverless architecture, scalability is possible at the function level, unlike other virtualization mechanisms where scalability is only possible at the application level.

3. It is now far simpler for software engineers to develop, deploy and manage applications running on the cloud. What once cost thousands of dollars to run an app on a cloud server has dropped to roughly a tenth of the original cost, thanks to this “virtual, fractional ownership” concept, with an all-new “serverless” way of coding the app and simply renting compute to run it.

7. Business model for viability

The “Intelligent Mentoring System” puts the learner at the center of the universe, with continuous personalization of content next, in the process creating an ecosystem that promotes individual and collective progress.
Content delivered to the learner can be learning resources available in the content database, or people nearby with relevant insights who may have an answer to the concept sought. The “Intelligent Mentoring System” creates a strong value proposition for learners, content creators, and the mentors / explainers / teachers who participate in the learner's progress. The solution opens new possibilities and revenue streams for various market participants:

1. Revenue stream #1: The learner subscribes to the “Intelligent Mentoring System” and pays a combination of a one-time fixed fee and a monthly recurring fee for the technology: the fixed fee for the hardware (SmartGlass) and the recurring fee for 1) firmware and 2) software (mobile app), as these require regular upgrades.

2. Revenue stream #2: A mentor / content creator subscribes to the “Intelligent Mentoring System” and can earn a consulting fee for being listed on the platform as a mentor, offering to explain / demystify concepts to learners.

3. Revenue stream #3: Schools / coaching institutions may purchase a reseller / distributor license and distribute the solution at a marked-up price (MRP decided by the solution provider).

4. Revenue stream #4: The Government may jointly own the solution along with the solution provider, becoming the primary source of learning resources available on the platform for consumption.

“The support of heterogeneous mobile devices is important for increasing learning convenience and efficiency in a mobile learning environment. By identifying individual device capabilities, content adaptation provides a solution to the heterogeneity of devices for learners. In an adaptive educational system, content adaptation offers appropriate learning content suited both to the device’s specifications and to the learner’s abilities.” – Ho-Chuan Huang, Tsui-Ying Wang, Fu-Ming Hsieh on Constructing an Adaptive Mobile Learning System for the Support of Personalized Learning and Device Adaptation

8. Driving large scale adoption

The “Intelligent Mentoring System” is intended to be an open platform and would need to be a joint venture, best managed by a group of organizations and governments, to protect the interests of the learner and aid learners' progress in life. Organizations like Google, with an existing knowledge graph, could contribute open-source knowledge, while Government-affiliated organizations could contribute learning content aligned with their respective academic curriculums. Independent organizations like Byju's, Coursera and other online platforms offering additional learning resources may also participate in jointly managing the content on the platform.

Adoption of this solution could be largely driven through mainstream educational institutions and influencers (like coaching institutions), for enriching the learner's life experience. Government-affiliated schools in tier 2 / 3 towns in India could distribute the solution by providing learners with Android phones with 3G internet speed, with the software installed. The solution would first have to be piloted in one village, for a non-English-speaking audience, so that the accuracy of Google translations may also be tested. Once the model is proven to work in one village, replication measures can be taken for large-scale adoption.

“Adaptive learning tools are designed to support an approach to teaching and learning, whereby each student is working on only the skills that he or she needs. However, in order for adaptive learning tools to work successfully in real classrooms, they must be integrated with an appropriate pedagogical model, and an appetite and infrastructure for change at the system level. For example, if a teacher uses curriculum with a strict pacing guide – outlining every objective the teacher must teach each day of the school year with frequent assessments and without flexibility or exceptions – incorporating a tool with an adaptive sequence into the classroom will most likely be unsuccessful. Tools with adaptive sequences allow students to work on any skill at any time, the tool’s approach and the teacher’s approach are in conflict.” – Pearson Publishing, EdSurge on Decoding Adaptive

9. Continuous evolution

The “Intelligent Mentoring System” would be a joint venture by a group of organizations and governments committed to protecting the interests of the learner and his/her progress in life. This new community / organization would have its own designers and engineers, who would work closely with the market research team on rapid prototyping and iterative development of the solution. The solution would be owned by this group of organizations, eventually making the platform open source and subsidized. Learners on the platform would eventually become active contributors and participate in further grass-roots level adoption.

10. Conclusion

The “Intelligent Mentoring System” as a concept is a highly promising solution for learners of this century, as it creates a strong value proposition for the learner, integrating learning with life experience and making learning a lifelong process rather than a one-time destination. The technology available today presents a huge possibility of making content personalized and relevant, delivered at moments of high impact.

Graph databases are best suited to efficiently handle data closest to our realities, interwoven in complex ways. Storing learning resources (content) as well as learners' information as graphs would enable us to build large-scale systems that help knowledge seekers learn in a more holistic way, a step towards a more practical way of consuming knowledge, making learning a natural extension of living life in the moment, much as the human brain interrelates every experience and derives wisdom from it. In short, it is a potential solution to eliminate rote learning, the problem plaguing education in India.

Graph databases have been built to scale exceptionally well, serving millions of learners with fast-growing social graphs (large numbers of nodes and/or relationships), whereas most relational databases begin to see performance degradation as usage increases. Graph databases are easily extended to hold many relationships among nodes and many attributes on any given node and/or relationship, making them best suited to model the non-linearity and multi-dimensionality of learners' growth. Attempting the same in traditional databases would demand the creation of more tables and additional schema in the underlying database, which would negatively impact the system's performance. Graph also has an edge over other NoSQL databases, as it leverages the relationships among data, revealing unexpected patterns that lead to new personalization opportunities which would otherwise remain unexplored.

The “Intelligent Mentoring System”, beyond being a technological marvel, would also present a compelling, self-sustainable business model. This is an opportunity for leaders in the EdTech industry to come forward, join hands and pioneer it, leading the way for many nations to adopt. From a solution architecture perspective, the learner is in touch with the system as they navigate through their realities, either via the smartglass or via the mobile app; the learner would have access to either or both of these interfaces.
The year 2019 and beyond would see a leap in AI and wearable tech in education, and this solution is a potential candidate for governments to implement, to bring forth a large-scale learning revolution.

11. References

1. Meg Coffin Murray and Jorge Pérez, "Comparing Adaptive Learning to Traditional Learning"
2. Meg Coffin Murray and Jorge Pérez, "Adaptive Learning Theory"
3. Pearson Publishing, EdSurge, "Decoding Adaptive"
4. Ho-Chuan Huang et al., Procedia – Social and Behavioral Sciences 64 (2012), 332–341
5. Ho-Chuan Huang, Tsui-Ying Wang, Fu-Ming Hsieh, "Constructing an Adaptive Mobile Learning System for the Support of Personalized Learning and Device Adaptation"
6. Marko A. Rodriguez and Peter Neubauer, "The Graph Traversal Pattern"
7. Chad Vicknair, Michael Macias, Zhendong Zhao, Xiaofei Nan, Yixin Chen, Dawn Wilkins, "Comparison of a Graph Database and a Relational Database"
8. Jerome Baton and Rik Van Bruggen, Learning Neo4j 3.x, Packt Publishing (2017)
9. Ian Robinson, Jim Webber and Emil Eifrem, Graph Databases, Second Edition
10. Vidhya Abhijith and Nishant Choudhary, co-authored insights on Codewave Insights (https://insights.codewave.com)
11. Bc. Jakub Drdak, "Recommendation Algorithms Optimization", 2018
12. Jan Skrasek, Master's Thesis, "Social Network Recommendation using Graph Databases"
13. "Design and Implementation of Movie Recommender System Based on Graph Database", IEEE 2017, DOI: 10.1109/WISA.2017.34
14. https://neo4j.com/developer/graph-db-vs-nosql/
15. Reshma K.R, Mary Femy P.F and Surekha Mariam Varghese, "Outcome analysis in academic institutions using Neo4j", DOI: 10.5121/ijcsity.2016.4202
16. https://en.wikipedia.org/wiki/Recommender_system
