InfraNodus
Graph Language Processing Settings:

 
Specify the settings for your text-to-network conversion algorithm for this graph.
Lemmatizer: ?
Every word will be converted to its lemma (e.g. bricks > brick, taken > take) and shown on the graph as a node. Set the lemmatizer to your language for more precise results, or switch it off to disable lemmatization and use your custom stop words list below.
 
Show on Graph:   Double Brackets [[]]:  Categories and Tags:   
Stop Words: ?
List the words, comma-separated (no spaces), that should not appear in the graph, in addition to your default global stopwords list.
Example: is,the,as,to,in

 
Synonym Nodes: ? unmerge all
If you'd like some words to appear as one node on the graph, in addition to your default global synonyms list, list the synonyms, one per line.
Example:
machine:machine learning
learning:machine learning
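
Taken together, these three settings define a small preprocessing pipeline. Below is a minimal sketch of that pipeline, assuming spaCy as the lemmatizer (the panel does not say which lemmatizer InfraNodus uses) and reusing the stop words and synonyms from the examples above:

```python
# Hedged sketch: lemmatize, drop stop words, merge synonyms into one node.
# spaCy is an assumption; the actual InfraNodus lemmatizer is not specified.
import spacy

nlp = spacy.load("en_core_web_sm")

stop_words = {"is", "the", "as", "to", "in"}      # from the example above
synonyms = {"machine": "machine learning",        # keys should be lemma forms,
            "learn": "machine learning"}          # since matching happens after
                                                  # lemmatization (learning -> learn)

def to_nodes(text):
    """Return the list of graph nodes for one statement."""
    nodes = []
    for token in nlp(text):
        lemma = token.lemma_.lower()              # bricks -> brick, taken -> take
        if lemma in stop_words or not token.is_alpha:
            continue
        nodes.append(synonyms.get(lemma, lemma))  # merge synonyms into one node
    return nodes

print(to_nodes("Machines taken to learning the bricks"))
```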

 

Dynamic Graph Settings


See the dynamic evolution of this graph: scroll or "play" the text entries to see how the text propagated through the network graph over time.

the final graph

highlight propagation edge
show visible statements only



 
Play the Graph


current speed of the player:
0 2000

one statement at a time


Export the Data


Network Graph Images:

Graph images for publishing on the web or in a journal. For embeds and URLs, use the share menu.
PNG (Image)  SVG (Hi-Res)

Visible Statements (Tagged):

Export the currently filtered (visible) statements with all the meta-data tags (topics, sentiment).
CSV (Spreadsheet)   MD (e.g. Obsidian)

Text Mining Analytics:

Summary of insights from the analytics panel. For specific in-depth exports, see the download button in the Analytics panel.
TXT Analytics Report  CSV Report  N-Grams CSV

Network Graph Data:

The raw data with all the statistics for further analysis in other software.
JSON  CSV  GEXF (Gephi)

All the Text (for backup and duplicating):

Plain text used to create this graph, without any meta-data.
Download Plain Text (All Statements)
Share Graph Image

 
Share a non-interactive image of the graph only, no text:
Download Image Tweet
 
Share Interactive Text Graph

 

 
Save This Graph View:

 

Delete This Graph:

 

Your Project Notes
Interpret graph data, save ideas, AI content, and analytics reports. Add Analytics
Top keywords (global influence):
Top topics (local contexts):
Explore the main topics and terms outlined above or see them in the excerpts from this text below.
See the relevant data in context: click here to show the excerpts from this text that contain these topics below.
Tip: use the form below to save the most relevant keywords for this search query. Or start writing your content and see how it relates to the existing search queries and results.
Tip: here are the keyword queries that people search for but don't actually find in the search results.

MVI_9664


1:52 Welcome to this episode of The Context. Today I want to talk to you about how artificial intelligence assistants are moving from the layer of understanding individual components of the infosphere around them and around us to the semantic layer: understanding the meaning and the implications, in a broader context, of what the information is and, as a consequence, what knowledge we can potentially derive


2:31 at a higher layer of abstraction. We have many examples of AI assistants that, in an increasing number of situations, are helping so that we can work better, we can communicate better, or our entertainment choices better correspond to our expectations of quality for the time that we are investing in each of these activities.


3:06 And AI assistants have become, in many aspects, superhuman in their performance. In the eighties and the nineties we were trying to build artificial intelligence components with a top-down approach. We would carefully craft rules that, put together, would resemble the activities and the reasoning of a human expert. However, this approach couldn't scale.


3:44 On one hand, it was difficult to formalize the judgment of an expert who would too often say, oh, I'm going with my gut; and insisting in the interview process would not necessarily lead to a useful, growing number of rules that could be formally described in a program. On the other hand, when it did happen and we increased the number of rules of our expert systems from a few hundred to thousands, these became extremely difficult to debug: they became brittle, and they could not perform even if we added more rules.


4:34 We couldn't predict their behavior. Already 40 years ago there were neural networks that would carefully change the weights of certain connections between layers of variables so that, given a type of input, they could generate a certain output. The simplest example of neural networks is recognizing handwritten digits, where the number four or the number seven or the number eight,


5:17 as written by several people, may not be very similar, but still we are pretty good at recognizing: yes, that is a four, that is a seven, that is an eight. Computers were not good at all, but neural networks were applied and, little by little, they became pretty good at recognizing handwritten numbers as well as handwritten letters.
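
For readers who want to try this classic example, here is a minimal modern sketch, not the systems of that era, assuming scikit-learn and its small bundled digits dataset: a one-hidden-layer network whose connection weights are adjusted during training, as the episode describes.

```python
# Hedged sketch of handwritten-digit recognition with a small neural network.
# Uses scikit-learn's 8x8 digits dataset, not the episode's own data or tools.
from sklearn.datasets import load_digits
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPClassifier

digits = load_digits()                      # 1797 images, 8x8 pixels each
X_train, X_test, y_train, y_test = train_test_split(
    digits.data, digits.target, test_size=0.25, random_state=0)

# One hidden layer whose connection weights are adjusted during training.
clf = MLPClassifier(hidden_layer_sizes=(64,), max_iter=500, random_state=0)
clf.fit(X_train, y_train)
print(f"test accuracy: {clf.score(X_test, y_test):.2%}")
```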


5:44 However, it appeared that their performance would plateau, and they really couldn't go from the simplest applications to more complex applications. Originally this was formulated in an almost joking manner, where people would say: well, computers are not even able to tell dogs and cats apart in a photo.


6:14 As is often the case, what was necessary was an improvement, a real innovation, in the mathematical approach of the algorithms we were applying as implemented in the neural networks. And in 2012 this change occurred. There was a contest for recognizing images, based on a database of images that anybody could take to both train and test their neural network's performance. Before 2012, a neural network running this test


7:05 would on average be able to recognize not more than 70 to 80 percent of the images; it would fail 20 to 30 percent of the time, which is a huge number of failures. Human performance is over 95 percent on the same set of images. When the new algorithms started to be implemented, neural network performance on image recognition very rapidly achieved and then surpassed human performance, and today we have image recognition and image classification on computers that is literally superhuman. If you are given


8:00 a thousand images and you are asked, is this a horse, is this a dog, is this a bird, is this a bridge, is this a tower, the descriptors that you would assign to those images would be wrong 50 times out of a thousand. In the case of computers, it may be half of that or even less than half of that.


8:28 One of the earliest practical examples of this can be found in the photo-sharing platform called Flickr, now owned by Yahoo. On Flickr, and then more recently on Google Photos, there are literally hundreds and now thousands of different categories, and each of your photos is classified automatically across all of those categories. What that enables you to do is to start typing and say: I want all the photos that have people who are smiling on the beach at sunset, out of the photos that I have stored.


9:16 On Google Photos, for example. And I am giving you that example because I know that, out of the over 200,000 photos that I store on Google Photos, this search gives back two photos of my children during the summer holiday. They are on the beach, smiling, at sunset.


9:46 This is pretty remarkable, because previously we would be required to manually label the images and classify them ourselves. You may remember that we used to take chemical photos, and then we would store them in boxes and write "summer vacation 1993", or whatever other year, on them.


10:24 But obviously, after putting a photo in a given box, it could not belong to any other box. There was no alternative way of classifying the photos. Another example is managing your music collection, whether it was mixtapes or CDs or whatever we wanted to use. We were required to manually select what playlist a song would belong to and what mood it represented; and today we have tens of millions of songs in Apple Music or Alexa or other systems.


11:10 And we can select a mood, and automatically a playlist is created that corresponds to that mood, made of songs that, based on the history of the songs we listen to or skip, we will like. And of course there is Netflix, which has a recommendation algorithm for what movie we should watch next, based on our previous ratings of thumbs up or thumbs down. And famously,


11:49 Netflix ran a contest where they asked teams of developers from all over the world, who could download the data set of ratings and matches against anonymous Netflix users, to improve the recommendation engine. And the prize was a cool million dollars. And the two top teams, joining forces, were able to achieve that improvement and take the prize.


12:25 So these are three examples of AI systems, recommendation engines and classification engines, that we use almost every day, and there are many, many others. As the information flow that we either receive or generate increases, we need to increase the ambition of our AI systems. We need to aim for applying a higher level of understanding of the topics we are covering, in order to be able to own the data, to extract knowledge, and to be able to act on it usefully


13:21 and rapidly. One of the benefits to the supporters of The Context on Patreon is that you receive the transcript of the episodes together with the episode as well. And many people are grateful for that, because of course you can listen to me for half an hour or so, but if you have the transcript you can just glance through the text, speed reading or stopping here and there with your eye on what


14:08 I'm talking about, and in that case you will probably be able to get a fair percentage of at least an understanding of what I'm talking about without spending half an hour. And I have had people writing to me saying that they don't have half an hour to dedicate, but they do have the time they need to glance at the transcript and do it at a much faster speed.


14:39 Now, the next step that I am implementing with my content production, which in the meantime has considerably increased: because on top of the weekly Context episode, I am now producing four shows that are not necessarily daily but are quite frequent. There is Searching For The Question Live, in the European and American editions on one hand and the Asia Pacific edition on the other


15:30 hand. These are live streams at 7pm CET, which is 1pm New York time and 10am California time; and then there is another one, which I just mentioned, in a time slot that is more compatible with guests joining the live show from Japan, Korea, China, Australia, New Zealand, which is live at 10am in the morning, European time.


15:48 And then, on top of that, I also have an Italian show, Qual è la Domanda, that is live at 3pm, and Network Society Ventures Pitching Live, which I mentioned in the previous episode of The Context, which allows startups to meet investors in a kind of startup pitch competition: presenting their project and then receiving a barrage of aggressive questions, pointedly critiquing the presentation but also highlighting what the potential of the project is. So if I were to do this amount every day


16:39 and you wanted to follow me, you would have a really hard task of looking at something between 20 and 25 hours per week of new material. And even the transcriptions of these, and we are doing them, are a volume that I don't wish even the most fervent of my fans to have to go through every time.


17:14 I do have fans who are extremely dedicated. I have people who actually annotate episodes, underline and highlight, and find correlations, and these are extremely valuable activities. And that is what we are now starting to support with additional artificial intelligence components. There are, and there have been, systems of topic extraction for a long time, and these were typically very expensive tools for intelligence service units or enterprises that had tens of thousands of dollars per month to dedicate to the task. But of course, as it happens, the power of information tools and the digitization of our activities is to democratize over time,


MVI_9665


0:00 so that what has been exclusive and very expensive becomes inexpensive and accessible to all. This is the process that is part of the approach that Singularity University and Peter Diamandis have been popularizing for a long time; they talk about the six Ds of exponential change.


0:23 So the democratization of access to topic extraction tools now means that with no money and a bigger effort, or with very little money and much more user-friendly tools, it is possible to start analyzing a given amount of text to highlight correlations, concentrations of certain types of topics, the absence of correlations or the absence of certain types of topics, and many, many other queries that can be


1:10 both textually as well as visually analyzed and understood, so that very rapidly interesting additional questions can be asked about the corpus being analyzed. Now, this is the start of the experiment that I am telling you about, and if you want to follow the experiment with me, you can also check out the tool that I am using. It is called InfraNodus: I-N-F-R-A, N-O-D-U-S.


1:52 InfraNodus. And I still haven't built the complete experience with the tool to tell you whether the value that I'm going to gain, and then of course give to all of you, is going to be huge or a small amount of value. But of course, for me it is also a question of learning, and then applying this learning on how a large amount of


2:25 output, in my case the video streams, can be automatically transcribed and then automatically analyzed, so that the various topics covered can be highlighted and understood. The amount of information that is surrounding us is increasing every day. That is why we need these tools: so that we can act on a higher layer of abstraction and understand what the important facts are, and the important connections between the facts, that require our attention and our decision-making.


3:11 These are important, life-saving, world-changing decisions that can be made if the right tools are available. So AI tools are necessary. Without them, we would not be able to act on the amount of information we have. And now these tools are available not exclusively to those who can afford them at a very high cost, but to anyone who takes the step of understanding the need, finding the tool, implementing it and experimenting with it, and then delivering the value to themselves, their community, and others, like I am doing, and I am showing you as well how


4:04 to do it. As I said, this is just the beginning. I will give updates further on about how the experiment with this kind of abstraction-layer AI assistant and decision-making tool is going, and I hope you will also enjoy learning about it and will come with me along the journey. Thank you.


I sent it to the rest, thank you.


Show Nodes with Degree > 0:

0 0

Total Nodes Shown:
 extend

Filter Graphs:


Filter Time Range
from: 0
to: 0


Recalculate Metrics Reset Filters
Show Labels for Nodes > 0 size:

0 0

Default Label Size: 0

0 20



Edges Type:



Layout Type:


 

Reset to Default
semantic variability:
Semantic Variability Score
— modulates diversity of the discourse network  how it works?
The score is calculated based on how modular the structure of the graph is (> 0.4 means the clusters are distinct and separate from one another = multiple perspectives). It also takes into account how the most influential nodes are dispersed among those clusters (higher % = lower concentration of power in a particular cluster).
Actionable Insight:

N/A

We distinguish 4 states of variability in your discourse. We recommend that a well-formed discourse should go through every stage during its evolution (in several iterations).

  1 - (bottom left quadrant) — biased — low variability, low diversity, one central idea (genesis and introduction stage).
  2 - (top right) — focused — medium variability and diversity, several concepts form a cluster (coherent communication stage).
  3 - (bottom right) — diversified — there are several distinct clusters of main ideas present in the text, which interact on the global level but maintain specificity (optimization and reflection stage).
  4 - (top left) — dispersed — very high variability — there are disjointed bits and pieces of unrelated ideas, which can be used to construct new ideas (creative reformulation stage).

Read more in the cognitive variability help article.
Generate AI Suggestions
Your Workflow Variability:
 
Shows to what extent you explored all the different states of the graph, from uniform and regular to fractal and complex. Read more in the cognitive variability help article.

You can increase the score by adding content into the graph (your own and AI-generated), as well as removing the nodes from the graph to reveal latent topics and hidden patterns.
Phases to Explore:
AI Suggestions  
Main Topical Clusters:

please, add your data to display the stats...
+     full table   ?     Show AI Categories

The topical clusters are composed of the nodes (words) that tend to co-occur together in the same context (next to each other).

We use a combination of clustering and a graph community detection algorithm (the Louvain method, Blondel et al) to identify the groups of nodes that are more densely connected together than with the rest of the network. They are aligned closer to each other on the graph using the Force Atlas algorithm (Jacomy et al) and are given a distinct color.
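
A minimal sketch of this clustering step, assuming networkx (the actual InfraNodus backend is not documented here): build a co-occurrence graph from statements, then detect Louvain communities.

```python
# Hedged sketch: co-occurrence graph + Louvain communities (networkx >= 2.8).
import networkx as nx

statements = [["network", "graph", "node"],
              ["graph", "node", "cluster"],
              ["music", "playlist", "mood"],
              ["playlist", "mood", "song"]]

G = nx.Graph()
for words in statements:                    # words that co-occur in the same
    for i, w in enumerate(words):           # statement are linked
        for v in words[i + 1:]:
            if G.has_edge(w, v):
                G[w][v]["weight"] += 1
            else:
                G.add_edge(w, v, weight=1)

communities = nx.community.louvain_communities(G, seed=0)
for i, c in enumerate(communities):
    print(f"cluster {i}: {sorted(c)}")
```

Note that the Force Atlas step only affects the layout (where nodes are drawn); community membership and color come from the detection step above.
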
Most Influential Elements:
please, add your data to display the stats...
+     Reveal Non-obvious   ?

AI Summarize Graph   AI Article Outline

We use the Jenks elbow cutoff algorithm to select the top prominent nodes that have significantly higher influence than the rest.

Click the Reveal Non-obvious button to remove the most influential words (or the ones you select) from the graph, to see what terms are hiding behind them.

The most influential nodes are either the ones with the highest betweenness centrality — appearing most often on the shortest path between any two randomly chosen nodes (i.e. linking the different distinct communities) — or the ones with the highest degree.
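
A sketch of that selection logic, with a largest-gap cutoff standing in for the Jenks elbow method (an assumption; the exact cutoff is not documented here):

```python
# Hedged sketch: rank nodes by betweenness centrality, then cut at the
# largest gap in the sorted scores (a crude stand-in for the Jenks elbow).
import networkx as nx

G = nx.les_miserables_graph()                  # any graph works here
bc = nx.betweenness_centrality(G)
ranked = sorted(bc.items(), key=lambda kv: kv[1], reverse=True)

scores = [s for _, s in ranked]
gaps = [scores[i] - scores[i + 1] for i in range(len(scores) - 1)]
cut = gaps.index(max(gaps)) + 1                # keep everything above the gap

print("most influential:", [n for n, _ in ranked[:cut]])
```
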
Network Structure:
N/A
?
The network structure indicates the level of its diversity. It is based on the modularity measure (>0.4 for medium, >0.65 for high modularity, measured with the Louvain (Blondel et al 2008) community detection algorithm) in combination with the measure of influence distribution (the entropy of the top nodes' distribution among the top clusters), as well as the percentage of nodes in the top community.
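
A sketch of the two ingredients this label combines, modularity and the entropy of the influence distribution, under my own assumptions about the exact formula (the panel does not spell it out):

```python
# Hedged sketch: modularity of the Louvain partition plus the entropy of the
# top nodes' spread across clusters. Thresholds follow the panel text.
import math
import networkx as nx

G = nx.karate_club_graph()
parts = nx.community.louvain_communities(G, seed=0)
Q = nx.community.modularity(G, parts)

# Entropy of how the top-10 degree nodes spread across the clusters.
top = sorted(G.degree, key=lambda kv: kv[1], reverse=True)[:10]
counts = [sum(1 for n, _ in top if n in c) for c in parts]
probs = [c / len(top) for c in counts if c]
H = -sum(p * math.log2(p) for p in probs)

label = "high" if Q > 0.65 else "medium" if Q > 0.4 else "low"
print(f"modularity={Q:.2f} ({label}), influence entropy={H:.2f} bits")
```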


Download: TXT Report  CSV Report  More Options
Discourse Structure Advice:
N/A
Structural Gap Insight
(topics that could be better linked):
N/A
Highlight in Network   ↻ Show Another Gap   ?  
AI: Bridge the Gap   AI: Article Outline
 
A structural gap shows the two distinct communities (clusters of words) in this graph that are important, but not yet connected. That's where the new potential and innovative ideas may reside.

This measure is based on a combination of the graph's connectivity and community structure, selecting the groups of nodes that would either make the graph more connected if it's too dispersed or that would help maintain diversity if it's too connected.

Latent Topical Connectors
(less visible terms that link important topics):
N/A
?   ↻ Undo Selection
AI: Select & Generate Content
These are the latent brokers between the topics: the nodes that have an unusually high ratio of influence (betweenness centrality) to their frequency — meaning they may not appear as often as the most influential nodes, but they are important narrative-shifting points.

These are usually brokers between different clusters / communities of nodes, playing a not easily noticed and yet important role in this network, like "grey cardinals" of sorts.
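
A sketch of that scoring idea, using degree as a stand-in for word frequency (an assumption; the panel does not define the exact ratio):

```python
# Hedged sketch: "latent broker" score = betweenness / frequency.
# Degree is used as a proxy for word frequency here.
import networkx as nx

G = nx.les_miserables_graph()
bc = nx.betweenness_centrality(G)

broker = {n: bc[n] / G.degree(n) for n in G if G.degree(n) > 0}
top = sorted(broker, key=broker.get, reverse=True)[:5]
print("latent connectors:", top)
```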

Emerging Keywords
N/A

Evolution of Topics
(number of occurrences per text segment) ?
The chart shows how the main topics and the most influential keywords evolved over time. X-axis: time period (split into 10% blocks). Y-axis: cumulative number of occurrences.

Drag the slider to see how the narrative evolved over time. Select the checkbox to recalculate the metrics at every step (slower, but more precise).
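
The underlying computation can be sketched as follows: split the token stream into 10% blocks and accumulate keyword counts per block (the sample tokens and keywords below are made up for illustration).

```python
# Hedged sketch: cumulative keyword occurrences per 10% segment of the text.
tokens = ("network graph node graph music playlist mood "
          "graph node network song mood playlist graph").split() * 5
keywords = ["graph", "mood"]

block = max(1, len(tokens) // 10)              # 10% segments
totals = {k: 0 for k in keywords}
for i in range(0, len(tokens), block):
    segment = tokens[i:i + block]
    for k in keywords:
        totals[k] += segment.count(k)          # cumulative count (y-axis)
    print(f"segment {i // block + 1}: {dict(totals)}")
```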

 
Main Topics
(according to Latent Dirichlet Allocation):
loading...
 ?  

LDA stands for Latent Dirichlet Allocation — it is a topic modelling algorithm based on calculating the maximum probability of the terms' co-occurrence in a particular text or a corpus.

We provide this data for you to be able to estimate the precision of the default InfraNodus topic modeling method based on text network analysis.
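
For comparison, a minimal LDA sketch with gensim, a common implementation (whether InfraNodus uses gensim is not stated):

```python
# Hedged sketch: topic modelling with Latent Dirichlet Allocation (gensim).
from gensim import corpora
from gensim.models import LdaModel

docs = [["graph", "network", "node", "cluster"],
        ["music", "playlist", "mood", "song"],
        ["graph", "node", "network", "edge"]]

dictionary = corpora.Dictionary(docs)
corpus = [dictionary.doc2bow(d) for d in docs]   # bag-of-words per document

lda = LdaModel(corpus, num_topics=2, id2word=dictionary,
               random_state=0, passes=10)
for topic_id, words in lda.print_topics(num_words=4):
    print(topic_id, words)
```
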
Most Influential Words
(main topics and words according to LDA):
loading...

We provide LDA stats for comparison purposes only. It works with English-language texts at the moment. More languages are coming soon, subscribe @noduslabs to be informed.

Sentiment Analysis


positive: | negative: | neutral:
reset filter    ?  

We analyze the sentiment of each statement to see whether it's positive, negative, or neutral. You can filter the statements by sentiment (clicking above) and see what kind of topics correlate with every mood.

The approach is based on AFINN and Emoji Sentiment Ranking

 
Use the BERT AI model for English, Dutch, German, French, Spanish, and Italian to get more precise results (slower). The standard model is faster, works for English only, is less precise, and is based on a fixed AFINN dictionary.
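
A sketch of the standard AFINN-based path: a fixed dictionary assigns each word a valence score and the statement's total decides the label (the neutral threshold here is an assumption):

```python
# Hedged sketch: AFINN-style sentiment with the `afinn` package.
from afinn import Afinn

afinn = Afinn()   # fixed English valence dictionary (AFINN)

for statement in ["This is a wonderful, remarkable tool",
                  "These systems were brittle and difficult to debug",
                  "The episode is about thirty minutes long"]:
    score = afinn.score(statement)
    label = "positive" if score > 0 else "negative" if score < 0 else "neutral"
    print(f"{label:8} ({score:+.0f}): {statement}")
```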

Keyword Relations Analysis:

please, select the node(s) on the graph to see their connections...
+   ⤓ download CSV   ?

Use this feature to compare contextual word co-occurrences for a group of selected nodes in your discourse. Expand the list by clicking the + button to see all the nodes your selected nodes are connected to. The total influence score is based on the betweenness centrality measure. The higher the number, the more important the connections in the context of the discourse.
Top Relations in 4-grams
(bidirectional, for directional bigrams see the CSV table below):

⤓ Download   ⤓ Directed Bigrams CSV   ?

The most prominent relations between the nodes that exist in this graph are shown above. We treat the graph as undirected by default. Occurrences shows the number of times a relationship appears in a 4-gram window. Weight shows the weight of that relation.

As an option, you can also download the directed bigrams above, in case the direction of the relations is important (for any application other than language).
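
A sketch of the 4-gram window count described above: slide a 4-token window over the text and count each unordered pair that falls inside it.

```python
# Hedged sketch: bidirectional co-occurrence counts in a 4-gram window.
from collections import Counter
from itertools import combinations

tokens = "the graph shows the network of the graph nodes".split()

relations = Counter()
for i in range(len(tokens) - 3):
    window = tokens[i:i + 4]                    # 4-gram window
    for a, b in combinations(set(window), 2):
        relations[tuple(sorted((a, b)))] += 1   # undirected pair

for pair, n in relations.most_common(5):
    print(pair, n)
```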

Text Statistics:
Word Count: 0
Unique Lemmas: 0
Characters: 0
Lemmas Density: 0
Text Network Statistics:
Show Overlapping Nodes Only

⤓ Download as CSV  ⤓ Download an Excel File
Network Structure Insights
 
mind-viral immunity:
N/A
  ?
structure:
N/A
  ?
The higher the network's structural diversity and the higher the alpha in the influence propagation score, the higher its mind-viral immunity — that is, such a network will be more resilient and adaptive than a less diverse one.

In the case of a discourse network, high mind-viral immunity means that the text proposes multiple points of view and propagates its influence using both highly influential concepts and smaller, secondary topics.
The higher the diversity, the more distinct communities (topics) there are in this network, and the more likely it will be pluralist.
The network structure indicates the level of its diversity. It is based on the modularity measure (>0.4 for medium, >0.65 for high modularity, measured with the Louvain (Blondel et al 2008) community detection algorithm) in combination with the measure of influence distribution (the entropy of the top nodes' distribution among the top clusters), as well as the percentage of nodes in the top community.

Modularity: 0
Influence Distribution: 0 %
Topics: 0
Nodes in Top Topic: 0 %
Components: 0
Nodes in Top Comp: 0 %
Nodes: 0
Av Degree: 0
Density: 0
Weighted Betweenness: 0
 

Narrative Influence Propagation:
  ?
The chart above shows how influence propagates through the network. X-axis: lemma to lemma step (narrative chronology). Y-axis: change of influence.

The more even and rhythmical this propagation is, the stronger the central idea or agenda (see the alpha exponent below: ~0.5 or less).

The more variability can be seen in the propagation profile, the less the reliance on the main concepts (agenda) and the stronger the role of secondary topical clusters in the narrative.
propagation dynamics: | alpha exponent: (based on Detrended Fluctuation Analysis of influence) ?   show the chart
We plot the narrative as a time series of influence (using the words' betweenness scores). We then apply detrended fluctuation analysis to identify the fractality of this time series, plotting the log2 scales (x) against the log2 of accumulated fluctuations (y). If the resulting log-log relation can be approximated with a linear polyfit, there may be a power-law relation in how the influence propagates in this narrative over time (e.g. mostly non-influential words, occasionally words with a high influence).

Using the alpha exponent of the fit (which is closely related to the Hurst exponent), we can better understand the nature of this relation: uniform (pulsating | alpha <= 0.65), variable (stationary, has long-term correlations | 0.65 < alpha <= 0.85), fractal (adaptive | 0.85 < alpha < 1.15), and complex (non-stationary | alpha >= 1.15).

For maximal diversity, adaptivity, and plurality, the narrative should be close to "fractal" (near-critical state). For fiction, essays, and some forms of poetry — "uniform". Informative texts will often have "variable + stationary" score. The "complex" state is an indicator that the text is always shifting its state.
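
A compact DFA sketch over a synthetic influence series (numpy only; the alpha bands follow the text above, and the series itself is random stand-in data, not a real betweenness sequence):

```python
# Hedged sketch: Detrended Fluctuation Analysis of an influence time series.
import numpy as np

def dfa_alpha(series, scales=(4, 8, 16, 32, 64)):
    """Slope of log2(scale) vs log2(fluctuation) — the alpha exponent."""
    profile = np.cumsum(series - np.mean(series))   # integrated series
    flucts = []
    for s in scales:
        f = []
        for i in range(len(profile) // s):
            seg = profile[i * s:(i + 1) * s]
            x = np.arange(s)
            trend = np.polyval(np.polyfit(x, seg, 1), x)  # detrend per window
            f.append(np.sqrt(np.mean((seg - trend) ** 2)))
        flucts.append(np.mean(f))
    alpha, _ = np.polyfit(np.log2(scales), np.log2(flucts), 1)
    return alpha

rng = np.random.default_rng(0)
influence = rng.random(512)       # stand-in for a betweenness-per-word series
a = dfa_alpha(influence)
band = ("uniform" if a <= 0.65 else "variable" if a <= 0.85
        else "fractal" if a < 1.15 else "complex")
print(f"alpha = {a:.2f} -> {band}")
```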

Degree Distribution:
  calculate & show   ?
(based on the Kolmogorov-Smirnov test) ?   switch to linear
Using this information, you can identify whether the network has scale-free / small-world (long-tail power law distribution) or random (normal, bell-shaped distribution) network properties.

This may be important for understanding the level of resilience and the dynamics of propagation in this network. E.g. scale-free networks with long degree tails are more resilient against random attacks and will propagate information across the whole structure better.
If a power-law is identified, the nodes have preferential attachment (e.g. 20% of nodes tend to get 80% of connections), and the network may be scale-free, which may indicate that it's more resilient and adaptive. Absence of power law may indicate a more equalized distribution of influence.

The Kolmogorov-Smirnov test compares the distribution above to the "ideal" power-law ones (^1, ^1.5, ^2) and looks for the best fit. If the value d is below the critical value cr, it is a sign that both distributions are similar.
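
A sketch of that comparison, fitting the empirical degree distribution against ideal power laws (^1, ^1.5, ^2) with a two-sample Kolmogorov-Smirnov statistic; the sampling approach and the test graph are my assumptions, not the panel's exact procedure:

```python
# Hedged sketch: compare the degree distribution to ideal power laws.
import numpy as np
from scipy import stats
import networkx as nx

G = nx.barabasi_albert_graph(500, 2, seed=0)    # scale-free test graph
degrees = np.array([d for _, d in G.degree()])

ks = np.arange(degrees.min(), degrees.max() + 1)

best = None
for gamma in (1.0, 1.5, 2.0):
    pmf = ks.astype(float) ** -gamma            # ideal power-law weights
    pmf /= pmf.sum()
    sample = np.random.default_rng(0).choice(ks, size=2000, p=pmf)
    d = stats.ks_2samp(degrees, sample).statistic
    if best is None or d < best[1]:
        best = (gamma, d)

print(f"best power-law fit: exponent {best[0]}, KS d = {best[1]:.3f}")
```
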
Please, enter a search query to visualize the difference between what people search for (related queries) and what they actually find (search results):

 
We will build two graphs:
1) Google search results for your query;
2) Related searches for your query (Google's SERP);
Click the Missing Content tab to see the graph that shows the difference between what people search for and what they actually find, indicating the content you could create to fulfil this gap.
Please, enter a search query to discover what else people are searching for (from Google search or AdWords suggestions):

 
We will build a graph of the search phrases related to your query (Google's SERP suggestions).
Find a market niche for a certain product, category, idea or service: what people are looking for but cannot yet find*

 
We will build two graphs:
1) the content that already exists when you make this search query (informational supply);
2) what else people are searching for when they make this query (informational demand);
You can then click the Niche tab to see the difference between the supply and the demand — what people need but do not yet find — the opportunity gap to fulfil.
Please, enter your query to visualize Google search results as a graph, so you can learn more about this topic:

   advanced settings    add data manually
Discover the main topics, recurrent themes, and missing connections in any text or an article:  
Discover the main themes, sentiment, recurrent topics, and hidden connections in open survey responses:  
Discover the main themes, sentiment, recurrent topics, and hidden connections in customer product reviews:  
Enter a search query to analyze the Twitter discourse around this topic (last 7 days):

     advanced settings    add data manually

Enter a topic or a @user to analyze its social network on Twitter:

 advanced settings    add data manually

Sign Up